CN115695889B - Display device and floating window display method

Info

Publication number
CN115695889B
Authority
CN
China
Prior art keywords
data
content data
background content
video
floating window
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202211210716.5A
Other languages
Chinese (zh)
Other versions
CN115695889A
Inventor
张宏波
Current Assignee
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd
Priority to CN202211210716.5A
Publication of CN115695889A
Application granted
Publication of CN115695889B
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Controls And Circuits For Display Device (AREA)

Abstract

The disclosure relates to a display device and a floating window display method in the field of video processing, which enrich the display forms of the floating window area of a video playing interface and improve its display flexibility. The display device includes: a display; a user input interface configured to receive a user input operation for invoking a floating window while a video playing interface is displayed; and a controller configured to display a floating window area on the video playing interface in response to the user input operation. During video playback on the video playing interface, the background content data of the floating window area matches the currently played video picture data in real time, and the transparency of the background content data is lower than that of the video picture data.

Description

Display device and floating window display method
Technical Field
Embodiments of the present application relate to video processing techniques, and more particularly, to a display apparatus and a floating window display method.
Background
While a video application program plays a video, a user can invoke a floating window area on the video playing interface of the display device through a remote-control key operation or a touch screen operation, so that the video playing interface includes both a video area and a floating window area. The floating window area includes background content data and foreground content data (still image data and/or text description data). At present, the background content of the floating window area is usually a fixed preset background template; the display form is monotonous and the user experience is poor.
Disclosure of Invention
In order to solve, or at least partially solve, the above technical problems, embodiments of the present application provide a display device and a floating window display method.
In a first aspect, an embodiment of the present application provides a display apparatus, including:
A display;
A user input interface configured to: receive a user input operation for invoking a floating window while a video playing interface is displayed;
A controller configured to: display a floating window area on the video playing interface in response to the user input operation;
During video playback on the video playing interface, the background content data of the floating window area matches the currently played video picture data in real time, and the transparency of the background content data is lower than that of the video picture data.
In a second aspect, an embodiment of the present application provides a floating window display method, including:
While a video playing interface is displayed, receiving a user input operation for invoking a floating window;
In response to the user input operation, displaying a floating window area on the video playing interface;
During video playback on the video playing interface, the background content data of the floating window area matches the currently played video picture data in real time, and the transparency of the background content data is lower than that of the video picture data.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the floating window display method according to the second aspect.
In a fourth aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to implement the floating window display method according to the second aspect.
Compared with the prior art, the technical solutions provided by the embodiments of the present application have the following advantages. In the embodiments of the present application, while a video playing interface is displayed, a user input operation for invoking a floating window is received; in response to the user input operation, a floating window area is displayed on the video playing interface; and during video playback on the video playing interface, the background content data of the floating window area matches the currently played video picture data in real time, while the transparency of the background content data is lower than that of the video picture data. Because the background content data of the floating window area matches the currently played video picture data in real time, it changes in real time as the played video picture changes; and because the transparency of the background content data is lower than that of the video picture data, the background of the floating window area has a blurred (semi-transparent-like) effect rather than always displaying a fixed preset background template. This enriches the display forms of the floating window area and improves its display flexibility.
Drawings
In order to more clearly illustrate the embodiments of the present application or the related art, the drawings required for describing the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application; a person of ordinary skill in the art may derive other drawings from them.
FIG. 1 illustrates an operational scenario between a display device and a control device according to some embodiments;
FIG. 2 illustrates a hardware configuration block diagram of the control apparatus 100 according to some embodiments;
FIG. 3 illustrates a hardware configuration block diagram of the display device 200 according to some embodiments;
FIG. 4 illustrates a first flow diagram of a floating window display method according to some embodiments;
FIG. 5A illustrates a first interface schematic of a floating window display method according to some embodiments;
FIG. 5B illustrates a second interface schematic of a floating window display method according to some embodiments;
FIG. 6 illustrates a second flow diagram of a floating window display method according to some embodiments;
FIG. 7 illustrates a third interface schematic of a floating window display method according to some embodiments;
FIG. 8 illustrates a third flow diagram of a floating window display method according to some embodiments;
FIG. 9 illustrates a fourth flow diagram of a floating window display method according to some embodiments;
FIG. 10 illustrates a schematic diagram of the structure of a video encoding stream according to some embodiments;
FIG. 11 illustrates a fifth flow diagram of a floating window display method according to some embodiments;
FIG. 12 illustrates a sixth flow diagram of a floating window display method according to some embodiments;
FIG. 13 illustrates a seventh flow diagram of a floating window display method according to some embodiments;
FIG. 14 illustrates an eighth flow diagram of a floating window display method according to some embodiments;
FIG. 15 illustrates a ninth flow diagram of a floating window display method according to some embodiments.
Detailed Description
To make the objects and embodiments of the present application clearer, exemplary embodiments of the present application are described in detail below with reference to the accompanying drawings. It is apparent that the described exemplary embodiments are only some, not all, of the embodiments of the present application.
It should be noted that the brief description of terminology in the present application is intended only to facilitate understanding of the embodiments described below and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed according to their ordinary and customary meanings.
The terms "first", "second", "third", and the like in the description, the claims, and the above drawings are used to distinguish similar objects or entities and do not necessarily describe a particular order or sequence, unless otherwise indicated. It is to be understood that terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The display device provided by the embodiment of the application can have various implementation forms, for example, a television, an intelligent television, a laser projection device, a display (monitor), an electronic whiteboard (electronic bulletin board), an electronic desktop (electronic table), a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device and the like.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control device according to an embodiment, wherein the control device includes a smart device or a control apparatus. As shown in fig. 1, a user may operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes, and the display device 200 is controlled wirelessly or by wire. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, and the like.
In some embodiments, the display device 200 may also be controlled using a smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.). For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the display device may receive instructions without using the smart device or control apparatus described above, and may instead be controlled by the user through touch, gestures, or the like.
In some embodiments, the display device 200 may also be controlled in manners other than through the control apparatus 100 and the smart device 300. For example, a user's voice command may be received directly through a module configured inside the display device 200 for acquiring voice commands, or through a voice control device configured outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may be allowed to establish communication connections via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
Fig. 2 exemplarily shows a configuration block diagram of the control apparatus 100 according to an exemplary embodiment. As shown in fig. 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, an external memory, and a power supply. The control apparatus 100 may receive an input operation instruction from the user and convert it into an instruction that the display device 200 can recognize and respond to, serving as an intermediary for interaction between the user and the display device 200.
As shown in fig. 3, the display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a user interface 280, an external memory, and a power supply.
In some embodiments, the controller includes a processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first through nth interfaces for input/output.
The display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is configured to receive image signals output from the controller and display video content, image content, menu manipulation interfaces, and user manipulation UI interfaces.
The display 260 may be a liquid crystal display, an OLED display, a projection device, or a projection screen.
The communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example, the communicator may include at least one of a Wi-Fi module, a Bluetooth module, a wired Ethernet module, another network communication protocol chip or near field communication protocol chip, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the external control device 100 or the server 400 through the communicator 220.
The user interface 280 may be used to receive control signals from the control device 100 (e.g., an infrared remote control). It may also be used to directly receive user input operation instructions and convert them into instructions that the display device 200 can recognize and respond to; in this case it may be referred to as a user input interface.
The detector 230 is used to collect signals of the external environment or of interaction with the outside. For example, the detector 230 includes a light receiver, a sensor for capturing the intensity of ambient light; or the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, user attributes, or user interaction gestures; or the detector 230 includes a sound collector, such as a microphone, for receiving external sounds.
The external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, etc. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
The modem 210 receives broadcast television signals in a wired or wireless manner and demodulates audio/video signals, as well as EPG data signals, from multiple wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
The controller 250 controls the operation of the display device and responds to the user's operations through various software control programs stored on a memory (internal memory or external memory). The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the controller includes at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), and a random access Memory (Random Access Memory, RAM), a Read-Only Memory (ROM), a first interface to an nth interface for input/output, a communication Bus (Bus), and the like.
RAM, also called main memory, is internal memory that exchanges data directly with the controller. It can be read and written at any time (except during refresh) and is fast, and it often serves as temporary storage for the operating system or other running programs. Its biggest difference from ROM is data volatility: stored data is lost on power-down. RAM is used in computers and digital systems to temporarily store programs, data, and intermediate results. ROM operates in a non-destructive read mode; its contents can only be read, not written. Once written, the information is fixed and is not lost even when the power supply is turned off, so ROM is also called fixed memory.
The user may input a user command through a graphical user interface (GUI) displayed on the display 260, and the user input interface receives the command through the GUI. Alternatively, the user may input a command through a specific sound or gesture, which the user input interface recognizes through sensors.
A "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user, which enables conversion between an internal form of information and a user-acceptable form. A commonly used presentation form of a user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a graphically displayed user interface that is related to computer operations. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the display device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
The embodiments of the present application provide a display device and a floating window display method. The display device, or a functional module or functional entity within it, can implement the floating window display method provided by the embodiments of the present application. The display device includes a user input interface, a controller, and a display, which correspond to the user interface 280, the controller 250, and the display 260 of fig. 3, respectively.
In some embodiments of the present application, there is provided a display apparatus including: a display configured to display a video playing interface; a user input interface configured to receive, while the video playing interface is displayed, a user input operation for invoking a floating window; and a controller configured to display a floating window area on the video playing interface in response to the user input operation. During video playback on the video playing interface, the background content data of the floating window area matches the currently played video picture data in real time, and the transparency of the background content data is lower than that of the video picture data.
In this embodiment of the present application, because the background content data of the floating window area matches the currently played video picture data in real time, the background content of the floating window area changes in real time as the played video picture changes; and because the transparency of the background content data is lower than that of the video picture data, the background content of the floating window area has a blurred (semi-transparent-like) effect rather than always displaying a fixed preset background template. This enriches the display forms of the floating window area and improves its display flexibility.
In some embodiments of the present application, the controller is specifically configured to: in response to the user input operation, obtain position information of the floating window area in the video playing interface and the foreground content data in the floating window area; and, based on the position information of the floating window area on the video playing interface, execute the following steps for each frame of the video coding stream: divide the frame's video coding stream (original video data) into coded data of the video area and coded data of the floating window area; decode the coded data of the video area to obtain video picture data of the video area; decode the coded data of the floating window area to obtain video picture data of the floating window area, perform Gaussian filtering on the video picture data of the floating window area to obtain filtered background content data of the floating window area, and overlay the foreground content data on the foreground layer of the filtered background content data; and composite the video picture data of the video area with the background content data and overlaid foreground content data of the floating window area to generate a frame image including the video area and the floating window area, and display it on the video playing interface. In this way, the series of processing operations (decoding, then filtering, then layer overlaying) is performed on each frame of original video picture data, so that during video playback the background content data of the floating window area matches the currently played video picture data in real time with a high degree of matching, and the transparency of the background content data is lower than that of the video picture data.
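The per-frame pipeline just described (decode, Gaussian-filter, overlay, composite) can be illustrated with a minimal Python/OpenCV sketch. This is not the patented implementation: all function and parameter names are invented, the kernel size is illustrative, and for simplicity the sketch decodes the full frame and crops the floating-window region, whereas the embodiment partitions the encoded stream itself into per-region coded data.

```python
# Illustrative sketch only; decoding the full frame and cropping approximates
# the embodiment's partition of the encoded stream into per-region data.
import cv2
import numpy as np

def render_frame(frame_bgr, win_rect, foreground_rgba):
    """Blur the floating-window region of one decoded frame and overlay the
    foreground content (stills/text) on the blurred background."""
    x, y, w, h = win_rect                              # floating window position
    out = frame_bgr.copy()

    # Background content data: Gaussian-filtered copy of the covered picture.
    window_bg = cv2.GaussianBlur(out[y:y+h, x:x+w], (31, 31), 0)

    # Overlay the foreground layer using its alpha channel.
    alpha = foreground_rgba[..., 3:4].astype(np.float32) / 255.0
    fg = foreground_rgba[..., :3].astype(np.float32)
    window = alpha * fg + (1.0 - alpha) * window_bg.astype(np.float32)

    # Composite: untouched video area plus the processed floating-window area.
    out[y:y+h, x:x+w] = window.astype(np.uint8)
    return out
```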
In some embodiments of the present application, the controller is specifically configured to: in response to the user input operation, obtain position information of the floating window area in the video playing interface for the current frame of the video coding stream, and the foreground content data in the floating window area; based on the position information, obtain target coded data corresponding to the video area and background coded data corresponding to the floating window area from the current frame of the coded video stream; decode the target coded data to obtain current-frame video picture data; when the frame number of the current frame is not an integer multiple of a preset frame interval, determine the cached previous-frame background content data as the current-frame background content data, and generate a current frame image based on the current-frame video picture data, the current-frame background content data, and the foreground content data; and display the current frame image on the video playing interface. This reduces the processing overhead of most frame images during video playback, improves processing efficiency, and reduces picture stuttering; although exact real-time synchronization between the background content data of the floating window area and the original video picture data is not achieved, the background content data still matches the video picture data to a certain extent, as sketched below.
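A minimal sketch of this timed, asynchronous refresh, under the assumption that the blurred window background can simply be cached and reused between refresh frames; `DELTA_F`, the helper name, and the caching scheme are assumptions of the sketch, not the embodiment's actual structure.

```python
# Hypothetical refresh logic; rendering of the video area itself (which is
# refreshed every frame) is outside this sketch.
import cv2

DELTA_F = 10          # preset frame interval for refreshing the window areas
cached_bg = None      # previous-frame background content data

def window_background(frame_idx, frame_bgr, win_rect):
    global cached_bg
    x, y, w, h = win_rect
    if cached_bg is None or frame_idx % DELTA_F == 0:
        # Refresh frame: decode + filter the covered picture region.
        cached_bg = cv2.GaussianBlur(frame_bgr[y:y+h, x:x+w], (31, 31), 0)
    # All other frames reuse the cached background unchanged.
    return cached_bg
```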
In some embodiments of the present application, the controller is specifically configured to: in response to the user input operation, obtain position information of the floating window area in the video playing interface and the foreground content data in the floating window area; based on the position information, obtain target coded data corresponding to the video area and background coded data corresponding to the floating window area from the current frame of the coded video stream; decode the target coded data to obtain current-frame video picture data; obtain current-frame motion compensation data from the background coded data; based on the current-frame motion compensation data, perform motion compensation on the cached previous-frame background content data corresponding to the floating window area to obtain the current-frame background content data of the floating window area; generate a current frame image based on the current-frame video picture data, the current-frame background content data, and the foreground content data; and display the current frame image on the video playing interface.
The position information indicates the position of the floating window area in the video playing interface. The video area is the area of the video playing interface other than the floating window area and is used to display video pictures in the video coding stream.
In this embodiment of the present application, the current-frame motion compensation data can be obtained from the background coded data, and motion compensation can be performed on the cached previous-frame background content data corresponding to the floating window area to obtain the current-frame background content data. The full series of processing operations (decoding, filtering, and layer overlaying) therefore does not need to be performed on every frame of original video picture data corresponding to the floating window area, which reduces computation cost to a certain extent, shortens per-frame processing time, improves processing efficiency, reduces picture stuttering, and improves user experience.
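A sketch of the motion-compensation idea under strong assumptions: motion vectors are taken as a dict keyed by macroblock coordinates (parsing them from a real bitstream is decoder-specific and not shown), dimensions are assumed to be multiples of the macroblock size, and each block of the cached blurred background is copied from the location its vector points to.

```python
import numpy as np

def motion_compensate_background(prev_bg, motion_vectors, block=16):
    """Warp the cached previous-frame background content data using
    per-macroblock motion vectors from the background coded data.
    Assumes prev_bg's height and width are multiples of `block`."""
    h, w = prev_bg.shape[:2]
    cur_bg = np.empty_like(prev_bg)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dx, dy = motion_vectors.get((bx // block, by // block), (0, 0))
            sx = int(np.clip(bx + dx, 0, w - block))   # clamp to the image
            sy = int(np.clip(by + dy, 0, h - block))
            cur_bg[by:by+block, bx:bx+block] = prev_bg[sy:sy+block, sx:sx+block]
    return cur_bg
```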
In some embodiments of the present application, the controller is further configured to determine, before obtaining motion compensation data from the background coded data, whether the current frame of the coded video stream is a key frame; and the controller is specifically configured to obtain the current-frame motion compensation data from the background coded data if the current frame is not a key frame. That is, when a frame of the video coding stream is not a key frame, after the target coded data corresponding to the video area and the background coded data corresponding to the floating window area are obtained based on the position information, the current-frame motion compensation data can be obtained from the background coded data, and motion compensation can be performed on the cached previous-frame background content data to obtain the current-frame background content data of the floating window area. Thus, for non-key frames, decoding, filtering, and layer overlaying are not required; the current-frame background content data is obtained simply by motion-compensating the cached previous-frame background content data, which reduces computation cost to a certain extent, shortens per-frame processing time, improves processing efficiency, reduces picture stuttering, and improves user experience.
It can be understood that a key frame image changes significantly compared with the previous frame image and is intra-coded, so when the current frame of the coded video stream is a key frame, its background coded data includes no motion compensation data, and motion compensation cannot be performed on the cached previous-frame background content data to obtain the current-frame background content data of the floating window area. Even if motion compensation data for the key frame were computed by combining the motion compensation data of adjacent frames, the result of motion-compensating the cached previous-frame background content data would still differ considerably from the background content actually corresponding to the current frame image, so the background content data of the floating window area could not match the currently played video picture data in real time.
Therefore, in the embodiments of the present application, for the background coded data in a key-frame coded video stream, the current-frame background content can instead be obtained through the series of processing operations of decoding, filtering, and layer overlaying.
In some embodiments of the present application, the controller is further configured to: after determining whether the current frame of the coded video stream is a key frame, and before generating the current frame image based on the current-frame video picture data, the current-frame background content data, and the foreground content data, if the current frame is a key frame, decode the background coded data to obtain background video picture data, and filter the background video picture data to obtain the current-frame background content data. This ensures that when the current frame is a key frame, the background content data of the floating window area matches the currently played video picture data in real time; and by storing the current-frame background content data, it also ensures that the background content data corresponding to the consecutive non-key frames following the current frame matches the corresponding played video pictures in real time.
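Putting the two paths together, a hypothetical per-frame dispatch might look as follows: key frames (intra-coded, so no motion data) take the decode-and-filter path and refresh the cache, while other frames reuse the cached background via `motion_compensate_background` from the earlier sketch. The `frame` fields, `decode`, and `parse_motion_vectors` are assumed helpers, not real decoder APIs.

```python
import cv2

def update_background(frame, prev_bg):
    """Hypothetical dispatch between the two background-content paths."""
    if frame.is_key_frame:
        # Key frame: intra-coded, no motion data -> decode and filter.
        pixels = decode(frame.background_coded_data)             # assumed helper
        cur_bg = cv2.GaussianBlur(pixels, (31, 31), 0)
    else:
        mv = parse_motion_vectors(frame.background_coded_data)   # assumed helper
        cur_bg = motion_compensate_background(prev_bg, mv)       # earlier sketch
    return cur_bg  # caller caches this for the next frame
```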
Rather than applying a single degree of blurring to the entire floating window area, different parts of the floating window area may have different degrees of blurring. For example, where the foreground content of the floating window area includes still picture data and text description data, the blurring degree of the background content in the area corresponding to the text description data is greater than that of the background content in the other areas. Therefore, filtering of different degrees is applied to the background content data of different areas of the floating window area, achieving a layered display effect of the background content of the floating window area.
In some embodiments of the present application, the background coded data includes first coded data corresponding to a first area and second coded data corresponding to a second area, where the first area is the part of the floating window area in which no foreground content data is displayed, and the second area is the part of the floating window area corresponding to the foreground content data; the current-frame background content data includes first background content data and second background content data. The controller is configured to: decode the first coded data to obtain first video picture data; perform first filtering on the first video picture data to obtain the first background content data; decode the second coded data to obtain second video picture data; and perform second filtering on the second video picture data to obtain the second background content data. That is, the area in which no foreground content data is displayed (the first area) and the area corresponding to the foreground content data (the second area) have different degrees of blurring, achieved by applying filtering of different degrees to the first coded data and the second coded data. The filtering parameter of the first filtering is smaller than that of the second filtering, so the blurring degree of the first background content data is less than that of the second background content data.
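A sketch of this layered blur, with illustrative kernel sizes only: the exposed first area (F1) gets a lighter Gaussian blur than the second area (F3, under the text description), reflecting that the first filtering parameter is smaller than the second. The rectangle layout is an assumption of the sketch.

```python
import cv2

def layered_blur(window_pixels, f3_rect):
    """Light blur for the first area (F1), heavier blur for the text area (F3).
    Kernel sizes are illustrative; a smaller kernel means less blurring."""
    blurred = cv2.GaussianBlur(window_pixels, (15, 15), 0)       # first filtering
    x, y, w, h = f3_rect
    blurred[y:y+h, x:x+w] = cv2.GaussianBlur(
        window_pixels[y:y+h, x:x+w], (41, 41), 0)                # second filtering
    return blurred
```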
Video is composed of frames of pictures. After passing through a video encoder, a frame of picture is encoded into one or more slices; each slice contains an integer number of macroblocks, i.e., at least one macroblock, and at most one slice contains the macroblocks of the entire picture. Macroblock types include I-type, P-type, and B-type macroblocks (hereinafter, P-type and B-type macroblocks are collectively referred to as non-I-type macroblocks). All macroblocks in a key frame (I-frame) image are I-type macroblocks; the macroblocks in a non-key frame (P-frame or B-frame) image may all be non-I-type, or may be partly I-type and partly non-I-type.
For an I-type macroblock, the background coded data includes no motion compensation data, so in general the cached previous-frame background content data corresponding to the floating window area cannot be motion-compensated to obtain the current-frame background content for that macroblock. An I-type macroblock in a non-key frame image can instead be decoded, filtered, and layer-overlaid to obtain the current macroblock's background content data. The current-frame background content data that includes this macroblock's background content then differs little from the background content actually corresponding to the current frame image, so the background content data of the floating window area can match the currently played video picture data in real time.
Alternatively, since the proportion of I-type macroblocks in a non-key frame image is small and the proportion of non-I-type macroblocks is large, motion compensation data for an I-type macroblock can be derived by combining the motion compensation data of its adjacent non-I-type macroblocks in the non-key frame image, and the cached previous-frame macroblock background content data corresponding to the floating window area can be motion-compensated to obtain the current macroblock's background content data. The resulting current-frame background content data also differs little from the background content actually corresponding to the current frame image, so the background content data of the floating window area can still match the currently played video picture data in real time.
In some embodiments of the present application, the background coded data includes a plurality of macroblock data. The controller is further configured to determine, for each macroblock data, whether the current macroblock data is an I-type macroblock if the current frame of the coded video stream is not a key frame, before obtaining the motion compensation data from the background coded data. The controller is specifically configured to: when the current macroblock data is not an I-type macroblock, obtain current-macroblock motion compensation data (which belongs to the current-frame motion compensation data) from the current macroblock data; and, based on the current-macroblock motion compensation data, determine from the previous-frame background content data the previous-frame macroblock background content data that matches the current macroblock data, and motion-compensate it to obtain the current macroblock's background content data, which belongs to the current-frame background content data.
In this embodiment of the present application, when the current macroblock in the current frame image is a non-I-type macroblock, after the target coded data corresponding to the video area and the background coded data corresponding to the floating window area are obtained from the current frame of the coded video stream based on the position information, the current-macroblock motion compensation data can be obtained from the current macroblock data; based on it, the matching previous-frame macroblock background content data is determined from the previous-frame background content data and motion-compensated to obtain the current macroblock's background content data (which belongs to the current-frame background content data). Decoding, filtering, and layer overlaying therefore do not need to be performed on original macroblock data that is not an I-type macroblock; the current macroblock's background content data is obtained simply by motion-compensating the cached previous-frame macroblock background content data.
In some embodiments of the present application, the controller is further configured to: after determining whether the current macroblock data is an I-type macroblock, when it is an I-type macroblock, decode the current macroblock data to obtain current-macroblock video picture data, and filter it to obtain the current macroblock's background content data, which belongs to the current-frame background content data.
In some embodiments of the present application, the controller is further configured to: after determining whether the current macroblock data is an I-type macroblock, if it is an I-type macroblock, obtain supplementary motion compensation data for the current macroblock from at least one macroblock data adjacent to it (the supplementary motion compensation data belongs to the current-frame motion compensation data); and, based on the supplementary motion compensation data, determine from the previous-frame background content data the previous-frame macroblock background content data that matches the current macroblock data, and motion-compensate it to obtain the current macroblock's background content data, which belongs to the current-frame background content data.
In this embodiment of the present application, when the current macroblock data is an I-type macroblock, either path works: decode it, filter it, and perform layer overlaying to obtain the current macroblock's background content data; or derive motion compensation data for the I-type macroblock by combining the motion compensation data of its adjacent macroblocks in the non-key frame image and motion-compensate the cached previous-frame macroblock background content data. In either case, the current-frame background content data that includes this macroblock's background content differs little from the background content actually corresponding to the current frame image, so the background content data of the floating window area can match the currently played video picture data in real time.
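A per-macroblock sketch of the options just described, with invented field names (`mb.type`, `mb.motion_vector`, `mb.position`) and assumed helpers (`decode_macroblock`, `warp_block`); averaging the neighbors' vectors is just one plausible way to "combine" adjacent macroblocks' motion compensation data.

```python
import cv2
import numpy as np

def background_for_macroblock(mb, prev_bg, neighbors):
    """Hypothetical handling of one macroblock in a non-key frame."""
    if mb.type != "I":
        mv = mb.motion_vector                # taken from the background coded data
    else:
        vecs = [n.motion_vector for n in neighbors if n.type != "I"]
        if not vecs:
            # No usable neighbors: decode and filter this I-type macroblock.
            return cv2.GaussianBlur(decode_macroblock(mb), (31, 31), 0)  # assumed
        # Supplementary motion compensation: average the neighbors' vectors.
        mv = tuple(int(round(c)) for c in np.mean(vecs, axis=0))
    return warp_block(prev_bg, mb.position, mv)  # assumed block-warp helper
```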
In some embodiments of the present application, the controller is further configured to store the current-frame background content data. It will be appreciated that whether the current frame of the coded video stream is a key frame or a non-key frame, the current-frame background content data needs to be saved so that the next frame's background content data can be obtained from it.
It should be noted that, the specific description of the display device provided in the embodiment of the present application may refer to the following description of the floating window display method, and may achieve the same or similar technical effects, which are not described herein again.
For a more detailed description of the present solution, examples are given below with reference to fig. 4 to 15. It will be understood that the flows in fig. 4 to 15 may include more or fewer steps in actual implementation, and the order of the steps may differ, as long as the floating window display method provided in the embodiments of the present application can be carried out. The floating window display method is implemented by the display device, or by a functional module or functional entity in the display device capable of implementing the method; this is not limited herein.
Fig. 4 is a flowchart illustrating steps of a floating window display method according to one or more embodiments of the present application, which may include the following S401 to S402.
S401, while a video playing interface is displayed, receiving a user input operation for invoking a floating window.
S402, in response to the user input operation, displaying a floating window area on the video playing interface.
During video playback on the video playing interface, the background content data of the floating window area matches the currently played video picture data in real time, and the transparency of the background content data is lower than that of the video picture data.
In this embodiment of the present application, because the background content data of the floating window area matches the currently played video picture data in real time, the background content of the floating window area changes in real time as the played video picture changes; and because the transparency of the background content data is lower than that of the video picture data, the background content of the floating window area has a blurred (semi-transparent-like) effect rather than always displaying a fixed preset background template. This enriches the display forms of the floating window area and improves its display flexibility.
As shown in fig. 5A or fig. 5B, an exemplary video playing interface including a floating window is shown. The area indicated by reference numeral 501 is the video area, which displays the real-time video stream picture, and the area indicated by reference numeral 502 is the floating window area. The floating window area in fig. 5A includes a still image and a text description; the area corresponding to the text description includes background content whose degree of blurring differs from that of the background content of the other areas. The floating window area in fig. 5B includes multiple still images (still image 1, still image 2, still image 3), with a text description below each still image.
Illustratively, in connection with fig. 5A, the video area is denoted V1 and the floating window area is denoted V2; V2 is further divided into an area F1 corresponding to background content that includes no foreground content, an area F2 corresponding to the still image, and an area F3 corresponding to the text description content. After the floating window covers part of the original video picture, the remaining picture area, V1, stays exposed. The floating window area V2 comprises the three areas F1, F2, and F3, where F1 is the exposed floating-window area that remains after F2 and F3 are cut out; its image (background content data) is updated in real time as the picture in V2 changes. F2 displays a set of one or more still pictures, which remain unchanged during video playback. F3 is a set of one or more descriptive text areas (text description content) consisting of text/icons and a background base image; as the picture in V2 changes, the background image of F3 changes in real time, and the font color of the text/icons changes accordingly. A sketch of this subdivision follows.
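The F1/F2/F3 subdivision can be captured, purely for illustration, as a mask: F1 is whatever remains of the window after the still-image and text rectangles are cut out. The rectangle lists and the mask representation are assumptions of this sketch.

```python
import numpy as np

def f1_mask(window_h, window_w, f2_rects, f3_rects):
    """Boolean mask of F1: the window area left after F2 and F3 are cut out."""
    mask = np.ones((window_h, window_w), dtype=bool)
    for x, y, w, h in list(f2_rects) + list(f3_rects):
        mask[y:y+h, x:x+w] = False     # F2/F3 rectangles are not part of F1
    return mask
```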
As shown in fig. 6, an interaction diagram corresponding to an embodiment of the present application includes the following steps:
S601, the video client receives a first trigger operation for playing a target video.
The video client is an application (APP) corresponding to the video playing interface. The first trigger operation may be the user selecting the media card corresponding to the target video from multiple media cards; it may be determined according to the actual situation and is not limited herein.
S602, the video client responds to a first trigger operation, and sends a video request for acquiring a real-time encoding video stream of a target video to the cloud server.
Accordingly, the cloud server receives video requests from the video clients.
And S603, the cloud server sends the real-time coded video stream of the target video to the video client based on the video request.
Accordingly, the video client receives the real-time encoded video stream from the cloud server.
S604, the video client decodes and plays the real-time coded video stream in real time.
The video client and the cloud server repeatedly execute steps S602 to S604 to obtain the real-time encoded video stream and decode and play it in real time.
S605, the video client receives a second trigger operation for invoking the floating window.
S606, in response to the second trigger operation, the video client sends a query request for floating window display information to the cloud server.
Accordingly, the cloud server receives a query request from the video client.
S607, the cloud server sends the position information of the floating window and the foreground content data to the video client based on the query request.
Accordingly, the video client receives the position information of the floating window and the foreground content data from the cloud server.
Then, the video client and the cloud server repeatedly execute the following steps S608 to S610: obtain the real-time encoded video stream, and process it based on the position information of the floating window and the foreground content data to obtain a real-time video image including the floating window for playback.
And S608, the video client sends a video request for acquiring the real-time encoded video stream to the cloud server.
Accordingly, the cloud server receives video requests from the video clients.
S609, the cloud server sends the real-time coded video stream to the video client based on the video request.
Accordingly, the video client receives the real-time encoded video stream from the cloud server.
S610, the video client calculates and displays a current frame image comprising V1, F3 and F2 area pictures.
After entering the video playing interface, the user may issue a floating window invoking instruction via remote control input, voice input, touch input, or other means. The video client first queries the cloud server for the floating window information associated with the current video, including the page layout position and size of the floating window (covering the F1, F2, and F3 areas), the still pictures to be displayed in the F2 area, and the text description content to be displayed in the F3 area. After receiving the cloud server's query response, the video client computes, from the video coding stream data received from the cloud in real time, the current frame image including the V1, F3, and F2 area pictures, and displays it.
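The client side of this exchange (S605 to S607) might look like the following sketch; the endpoint path and JSON field names are pure assumptions for illustration, not the actual protocol between the video client and the cloud server.

```python
# Hypothetical client-side query; endpoint and field names are assumptions.
import requests

def query_floating_window_info(server, video_id):
    """Ask the cloud server for the floating-window layout and foreground data."""
    resp = requests.get(f"{server}/floating_window", params={"video": video_id})
    info = resp.json()
    win_rect = tuple(info["position"])   # page layout position and size
    stills = info["stills"]              # still pictures for the F2 area
    texts = info["texts"]                # text description content for F3
    return win_rect, stills, texts
```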
In some embodiments of the present application, S402 may specifically be: in response to the user input operation, obtain position information of the floating window area in the video playing interface and the foreground content data in the floating window area; and, based on the position information of the floating window area on the video playing interface, execute the following steps for each frame of the video coding stream: divide the frame's video coding stream (original video data) into coded data of the video area and coded data of the floating window area; decode the coded data of the video area to obtain video picture data of the video area; decode the coded data of the floating window area to obtain video picture data of the floating window area, perform Gaussian filtering on it to obtain filtered background content data of the floating window area, and overlay the foreground content data on the foreground layer of the filtered background content data; and composite the video picture data of the video area with the background content data and overlaid foreground content data of the floating window area to generate a frame image including the video area and the floating window area, and display it on the video playing interface. In this way, each frame of original video picture data goes through the series of decoding, filtering, and layer-overlaying operations, so that during playback the background content data of the floating window area matches the currently played video picture data in real time with a high degree of matching, and the transparency of the background content data is lower than that of the video picture data.
Illustratively, the four areas V1, F1, F2, and F3 in each frame image of the original video are processed in series, and the complete current frame image is displayed after all four are processed. First, the current frame of the original video's coded stream is decoded to obtain V1 and V2 (see (a) and (b) in fig. 7). Then, the decoded V2 is Gaussian-filtered to obtain V2' (see (c) in fig. 7), and F3 is Gaussian-filtered to obtain F3' (see (d) in fig. 7). Finally, the still image (area F2) and the text description content (area F3') are overlaid on the foreground layer of V2' (see fig. 5A; the portion of the original video covered by the floating window is denoted V2 and is not displayed in the foreground of the video playing interface), yielding the current frame image including the floating window.
In this embodiment of the present application, the background content data of the floating window area can in this manner be displayed in real time and kept synchronized with the original video picture data.
In some embodiments of the present application, S402 may specifically be: in response to the user input operation, obtain position information of the floating window area in the video playing interface for the current frame of the video coding stream, and the foreground content data in the floating window area; based on the position information, obtain target coded data corresponding to the video area and background coded data corresponding to the floating window area from the current frame of the coded video stream; decode the target coded data to obtain current-frame video picture data; when the frame number of the current frame is not an integer multiple of a preset frame interval, determine the cached previous-frame background content data as the current-frame background content data, and generate a current frame image based on the current-frame video picture data, the current-frame background content data, and the foreground content data; and display the current frame image on the video playing interface.
Illustratively, the picture of the floating window area is refreshed asynchronously on a timed basis: a fixed frame interval Δf is set for refreshing the floating window. Assuming the first frame of the original video is numbered frame = 0, the F1, F3, and F2 areas of the floating window are refreshed (decoding, filtering, layer overlaying, etc.) only when frame = n × Δf (n = 0, 1, 2, ...); at all other times the floating window picture remains unchanged and only the V1 area is refreshed in real time. This method reduces the processing overhead of most frame images during playback. It does not keep the background content data of the floating window area synchronized with the original video picture data in real time, but it does match the background content data to the video picture data to a certain extent (occasionally the F1 and V1 pictures match poorly, but in general the match is good), which improves user experience compared with a fixed preset background template.
In some embodiments of the present application, as shown in fig. 8, the above S402 may be specifically implemented by the following S801 to S807.
S801, responding to the user input operation, and acquiring position information of a floating window area in a video playing interface and foreground content data in the floating window area.
Wherein the foreground content data corresponds to the still image data of the F2 area and the text description data of the F3 area.
The position information indicates where the floating window area lies in the video playing interface; that is, the position of the floating window area can be determined from it. The area of the video playing interface outside the floating window area is the video area, which displays the part of the original video picture not covered by the floating window and corresponds to V1. From the position information, the coded data corresponding to the video area and the coded data corresponding to the floating window area in one frame of the coded video stream can likewise be determined. Therefore, based on the position information, the target coded data corresponding to the video area and the background coded data corresponding to the floating window area can be acquired from the current frame coded video stream.
S802, acquiring target coding data corresponding to a video area and background coding data corresponding to the floating window area from the current frame coding video stream based on the position information.
S803, decoding the target coded data to obtain the current frame video picture data.
The current frame video picture data is a video picture to be displayed in the video area, and corresponds to the video picture to be displayed in the V1.
S804, acquiring current frame motion compensation data from the background coding data.
S805, performing motion compensation processing on the cached previous frame background content data corresponding to the floating window area based on the current frame motion compensation data, to obtain the current frame background content data of the floating window area.
Wherein the background content data is, in effect, the part of the original video picture covered by the floating window.
The background content data of the current frame is a background content picture to be displayed in the floating window area, and corresponds to the background content pictures to be displayed in F1 and F3.
The specific motion compensation process may refer to the related art, and is not limited herein.
S806, generating a current frame image based on the current frame video frame data and the current frame background content data, and the foreground content data.
S807, displaying the current frame image on the video playing interface.
The current frame image is a real-time video image including a floating window.
In the embodiment of the application, the current frame motion compensation data can be acquired from the background coded data, and motion compensation processing can be performed on the cached previous frame background content data corresponding to the floating window area to obtain the current frame background content data of the floating window area. A series of processing operations (decoding, filtering, layer overlay and so on) therefore no longer needs to be performed on the floating-window portion of every frame of original video picture data. This reduces the computation cost to a certain extent, shortens the time consumed in processing each frame, improves processing efficiency, reduces picture stuttering, and improves the user experience.
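Illustratively, a sketch of the compensation step under stated assumptions: the macroblock records (position, MV, residual) are taken as already parsed from the coded stream, and MVs are assumed to keep the reference block inside the cached background buffer.

```python
import numpy as np

def compensate_background(prev_bg, blocks, block=16):
    """prev_bg: cached previous-frame background of the window (HxWx3 int16).
    blocks: parsed macroblock records, each with 'pos' (x, y), 'mv' (dx, dy)
    and 'residual' (block x block x 3 int16). All field names are illustrative."""
    cur_bg = np.empty_like(prev_bg)
    for b in blocks:
        x, y = b["pos"]
        dx, dy = b["mv"]
        # Best-matching block in the previous background, offset by the MV.
        ref = prev_bg[y - dy:y - dy + block, x - dx:x - dx + block]
        # Current block = reference pixels + prediction residual.
        cur_bg[y:y + block, x:x + block] = ref + b["residual"]
    return cur_bg
```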
In video compression, each frame represents a still picture, and actual compression applies various algorithms to reduce the data size, of which IPB is the most common. Briefly, an I frame is a key frame and uses intra-frame compression, similar in spirit to the compression of the Audio Video Interleaved (AVI) format. P denotes forward prediction and B denotes bidirectional prediction; both compress data relative to I frames and belong to inter-frame compression.
Wherein an I frame is a key frame and can be understood as a complete retention of the picture of that frame; it can be decoded from its own data alone, because it contains the complete picture. A P frame records the difference between the current frame and a preceding key frame (or P frame); during decoding, the previously cached picture is combined with the difference defined by this frame to produce the final picture. A P frame thus carries no complete picture data, only the difference from the preceding frame, i.e. it is a difference frame. A B frame is a bidirectional difference frame: it records the differences between the current frame and both the preceding and following frames (in practice there are four cases and the details are more complex). To decode a B frame, not only the preceding cached picture but also the following picture must be obtained, and the final picture results from combining the preceding and following pictures with the data of this frame. B frames achieve a high compression rate, but decoding them is more complex and consumes more CPU.
Therefore, in the embodiment of the present application, the current frame background content data may be obtained by the methods of S804 to S805 described above in the case where the current frame coded video stream is not a key frame, and by the methods of S809 to S810 described below in the case where the current frame coded video stream is a key frame.
In some embodiments of the present application, as shown in fig. 9 in conjunction with fig. 8, before S804, the floating window display method provided in the embodiment of the present application further includes S808 described below, where S804 may be specifically implemented by S804a described below.
S808, judging whether the current frame coded video stream is a key frame or not.
S804a, in case that the current frame encoded video stream is not a key frame, acquiring the current frame motion compensation data from the background encoded data.
In the embodiment of the application, when the current frame coded video stream is not a key frame, the current frame image is inter-frame coded and changes little relative to the previous frame (or the previous and following frames). The background coded data then comprises the motion compensation data of the current frame relative to the previous frame (or the previous and following frames), so the current frame motion compensation data can be acquired from it, and motion compensation processing can be performed on the cached previous frame background content data corresponding to the floating window area to obtain the current frame background content data of the floating window area. For original video picture data that is not a key frame, the floating-window portion therefore does not need the full series of decoding, filtering and layer overlay operations: the current frame background content data is obtained simply by motion-compensating the cached previous frame background content data. This reduces the computation cost to a certain extent, shortens the per-frame processing time, improves processing efficiency, reduces picture stuttering, and improves the user experience.
As shown in fig. 10, a video consists of individual frames of pictures. After passing through a video encoder, each frame is encoded into one or more slices; a frame may contain one or more slices, and each slice contains an integer number of macroblocks, i.e. each slice includes at least one macroblock and, at most, the macroblocks of the entire picture. A macroblock consists of one luminance pixel block and two additional chrominance pixel blocks (in H.264 each macroblock is fixed at 16×16 pixels, while the coding unit of H.265 ranges from a minimum of 8×8 pixels to a maximum of 64×64 pixels). Within each frame, the macroblocks are arranged in slices, and the video coding algorithm encodes macroblock by macroblock, with the macroblock as the unit, to form a continuous video coded stream.
Wherein the macroblock types include I-type macroblocks, P-type macroblocks and B-type macroblocks (hereinafter, P-type and B-type macroblocks are collectively referred to as non-I-type macroblocks). All macroblocks in a key frame (I frame) image are I-type macroblocks; the macroblocks in a non-key frame (P frame or B frame) image may all be non-I-type macroblocks, or may be partly I-type and partly non-I-type.
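Illustratively, the macroblock structure used in the sketches below can be modelled as follows. The field names are illustrative assumptions; real H.264/H.265 parsing is far more involved.

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np

@dataclass
class Macroblock:
    mb_type: str                        # "I", "P" or "B"
    pos: Tuple[int, int]                # top-left pixel position in the frame
    mv: Optional[Tuple[int, int]]       # motion vector; None for I-type blocks
    residual: Optional[np.ndarray]      # prediction residual for non-I blocks
    coded_pixels: Optional[np.ndarray]  # intra-coded data for I-type blocks

    @property
    def is_intra(self) -> bool:
        # I-type macroblocks carry no motion data and must be decoded directly.
        return self.mb_type == "I"
```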
In some embodiments of the present application, as shown in fig. 11 in conjunction with fig. 9, after S808, before S806, the method for displaying a floating window according to the embodiment of the present application further includes S809 to S809 described below.
S809, decoding the background coding data to obtain background video picture data under the condition that the current frame coding video stream is a key frame.
And S810, filtering the background video picture data to obtain the background content data of the current frame.
In the embodiment of the application, a key frame image changes considerably relative to the previous frame image and is intra-frame coded, so when the current frame coded video stream is a key frame its background coded data contains no motion compensation data. Motion compensation processing therefore cannot be performed on the cached previous frame background content data corresponding to the floating window area to obtain the current frame background content data. Even if motion compensation data for the key frame image were estimated from the motion compensation data of its adjacent frame images and used to compensate the cached previous frame background content data, the result would still differ considerably from the background content actually corresponding to the current frame image, and the background content data of the floating window area could not match the currently played video picture data in real time. Therefore, in the embodiment of the application, for the background coded data in a key frame coded video stream, the current frame background content data is obtained through the series of decoding, filtering and layer overlay operations, which ensures that the background content data of the floating window area matches the currently played video picture data in real time.
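Illustratively, the frame-level branch of S808 to S810 reduces to a small dispatch. The two callables are assumed stand-ins for the decode+filter path and the motion-compensation path described above, and the frame attributes are illustrative:

```python
def current_frame_background(frame, prev_bg, decode_and_filter, motion_compensate):
    """frame: parsed current-frame record with .is_keyframe and .background_data
    (assumed attributes); prev_bg: cached previous frame background content."""
    if frame.is_keyframe:
        # Intra-coded: no motion data to reuse, so decode and filter as usual.
        return decode_and_filter(frame.background_data)
    # Inter-coded: correct the cached background with the stream's motion data.
    return motion_compensate(prev_bg, frame.background_data)
```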
In some embodiments of the present application, the background coded data includes first coded data corresponding to a first area and second coded data corresponding to a second area, where the first area is the part of the floating window area in which no foreground content data is displayed, and the second area is the part of the floating window area corresponding to the foreground content data; the current frame background content data comprises first background content data and second background content data. As shown in fig. 12 in conjunction with fig. 11, the above S809 to S810 may specifically be implemented by the following S809a, S810a, S809b and S810b.
S809a, performing decoding processing on the first encoded data to obtain first video picture data.
S810a, performing first filtering processing on the first video picture data to obtain first background content data.
And S809b, decoding the second encoded data to obtain second video picture data.
And S810b, performing second filtering processing on the second video picture data to obtain second background content data.
In the embodiment of the application, the degree of blurring of the area in which no foreground content data is displayed (the first area) differs from that of the area corresponding to the foreground content data (the second area). By applying filtering of different strengths to the first coded data of the first area and the second coded data of the second area, the two areas can be blurred to different degrees. The filtering parameters of the first filtering process are smaller than those of the second filtering process, so the first background content data is less blurred than the second background content data. Filtering the background content data of different parts of the floating window area to different degrees in this way produces a layered display effect for the background content data of the floating window area.
Illustratively, in combination with the above example, when the current frame coded video stream is a key frame, each macroblock of the three regions V1, F1 and F3 is decoded. The decoded F1 and F3 regions are then processed with the following Gaussian blur functions. The Gaussian function for the F1 region is:

L1(x, y) = GaussianBlur(I1(x, y), K1(x, y), σ1)

and the Gaussian function for the F3 region is:

L3(x, y) = GaussianBlur(I3(x, y), K3(x, y), σ3)

Here, L1(x, y) and L3(x, y) are the images of the F1 and F3 regions after Gaussian blurring; GaussianBlur is the Gaussian blur function; I1(x, y) and I3(x, y) are the decoded input image data of the F1 and F3 regions; K1(x, y) and K3(x, y) are the input Gaussian kernel sizes, and the larger the kernel, the more blurred the processed image. Generally K1 < K3 is set, e.g. K1(x, y) = (3, 3) and K3(x, y) = (5, 5). σ is the standard deviation in the x direction, with a default value of 0.3 × ((kernel x-dimension − 1) × 0.5 − 1) + 0.8; for example, when K1(x, y) = (3, 3), the default value is σ1 = 0.3 × ((3 − 1) × 0.5 − 1) + 0.8 = 0.8. The Gaussian-blurred data L1(x, y) and L3(x, y) of the current frame's F1 and F3 regions are written into the video client's cache Buff(x, y). The video client then displays the current frame image by superimposing the decoded V1-region image, the Gaussian-blurred F1 and F3 region images, the F2-region still image and the F3-region text description content acquired from the cloud server on the screen.
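Illustratively, the same two-level blur can be reproduced with OpenCV's GaussianBlur (a real API call); passing sigmaX = 0 makes OpenCV derive σ from the kernel size using exactly the default formula quoted above. The dummy input regions are assumptions standing in for the decoded F1 and F3 data:

```python
import cv2
import numpy as np

# Dummy decoded regions standing in for I1 (F1) and I3 (F3).
I1 = np.random.randint(0, 256, (64, 256, 3), dtype=np.uint8)
I3 = np.random.randint(0, 256, (32, 256, 3), dtype=np.uint8)

# sigmaX = 0 tells OpenCV to derive sigma from the kernel size via
# sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8, the default quoted above.
L1 = cv2.GaussianBlur(I1, (3, 3), 0)  # K1 = (3, 3) -> sigma1 = 0.8, lighter blur
L3 = cv2.GaussianBlur(I3, (5, 5), 0)  # K3 = (5, 5) -> stronger blur for F3
```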
In some embodiments of the application, the background coded data comprises a plurality of macroblock data. Referring to fig. 12, as shown in fig. 13, before S804, the floating window display method provided in the embodiment of the present application further includes S811, where S804a may be specifically implemented by S804b, and S805 may be specifically implemented by S805a.
S811, for each macroblock data, judging whether the current macroblock data is an I-type macroblock.
S804b, if the current macroblock data is not an I-type macroblock, acquiring current macroblock motion compensation data from the current macroblock data.
Wherein the current macroblock motion compensation data belongs to the current frame motion compensation data.
S805a, determining the background content data of the previous frame macro block matched with the current macro block data from the background content data of the previous frame based on the motion compensation data of the current macro block, and performing motion compensation processing on the background content data of the previous frame macro block to obtain the background content data of the current macro block.
Wherein the current macroblock background content data belongs to the current frame background content data, and the previous frame background content data stores the background content data of each macroblock corresponding to the background content of the floating window area.
Wherein the current macroblock motion compensation data includes a Motion Vector (MV) and a Prediction Residual (PR) of the current macroblock.
Specifically, the best matching block (i.e. the previous frame macroblock background content data) is found in the previous frame background content data according to the MV of the current macroblock, and the sum of the pixel values of that best matching block and the PR of the current macroblock is determined as the current macroblock background content data.
Illustratively, let Buff(x, y) be the previous frame background content data. After Buff(x, y) is obtained, the motion vector (MV) and prediction residual (PR) of the current macroblock (i.e. the current frame motion compensation data) are taken from the source video coded stream. The coordinate position of the best matching block in Buff(x, y) is calculated from the MV, the pixel values of the best matching block (i.e. the previous frame macroblock background content data) are read from Buff(x, y) at that position, and the sum of those pixel values and the PR of the current macroblock is determined as the current macroblock background content data. For example, if the coordinates of the current macroblock in the current frame are (102, 205) and its MV is (2, 5), the coordinates of the best matching macroblock are calculated as (100, 200). The pixel values recorded at Buff(100, 200) are then read from Buff(x, y) and the PR of the current macroblock is added, giving the pixel values finally displayed for the macroblock (i.e. the current macroblock background content data).
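Illustratively, this worked example translates directly into array arithmetic. The buffer contents and residual values below are dummies; only the coordinates follow the text:

```python
import numpy as np

BLOCK = 16
buff = np.zeros((480, 640, 3), dtype=np.int16)     # Buff(x, y): previous frame background

cur_pos = (102, 205)                       # current macroblock position (x, y)
mv = (2, 5)                                # motion vector from the coded stream
pr = np.ones((BLOCK, BLOCK, 3), np.int16)  # prediction residual (dummy values)

# Best matching block = current position minus the MV -> (100, 200).
bx, by = cur_pos[0] - mv[0], cur_pos[1] - mv[1]
ref = buff[by:by + BLOCK, bx:bx + BLOCK]
cur_block = ref + pr                       # pixels finally displayed for this macroblock
```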
In the embodiment of the application, the motion compensation data between the current frame and the previous frame (or the previous and following frames) already present in the original video data is reused to motion-compensate the background content data of the floating window area (which is itself obtained by filtering the original video data). Processing efficiency can thus be improved while the background content data of the floating window area is kept matched with the currently played video picture data in real time.
In the embodiment of the application, when the current macroblock in the current frame image is not an I-type macroblock, after the target coded data corresponding to the video area and the background coded data corresponding to the floating window area are acquired from the current frame coded video stream based on the position information, the current macroblock motion compensation data can be acquired from the current macroblock data. Based on the current macroblock motion compensation data, the previous frame macroblock background content data matching the current macroblock data is determined from the previous frame background content data and motion-compensated to obtain the current macroblock background content data (which belongs to the current frame background content data). The series of decoding, filtering and layer overlay operations therefore does not need to be performed on macroblock data that is not I-type: the current macroblock background content data is obtained simply by motion-compensating the cached previous frame macroblock background content data.
In some embodiments of the present application, as shown in fig. 14 in conjunction with fig. 13, after S811, the floating window display method provided in the embodiment of the present application further includes S812 to S813 described below.
And S812, decoding the current macro block data to obtain the current macro block video picture data under the condition that the current macro block data is the I type macro block.
S813, filtering the current macro block video picture data to obtain current macro block background content data.
Wherein the current macroblock background content data belongs to the current frame background content data.
For an I-type macroblock, the coded data includes no motion compensation data, so in general the cached previous frame background content data corresponding to the floating window area cannot be motion-compensated to obtain the current frame background content data. An I-type macroblock in a non-key frame image can instead be decoded, filtered and layer-overlaid to obtain the current macroblock background content data. The current frame background content data that includes this current macroblock background content data then differs little from the background content actually corresponding to the current frame image, and the background content data of the floating window area can still match the currently played video picture data in real time.
In some embodiments of the present application, as shown in fig. 15 in conjunction with fig. 13, after S811, the floating window display method provided in the embodiment of the present application further includes the following S814 to S815.
S814, if the current macro block data is an I type macro block, acquiring supplementary motion compensation data corresponding to the current macro block from at least one macro block data adjacent to the current macro block data.
Wherein the supplemental motion compensation data belongs to the current frame motion compensation data.
S815, based on the supplementary motion compensation data, determining the background content data of the previous frame macro block matched with the current macro block data from the background content data of the previous frame, and performing motion compensation processing on the background content data of the previous frame macro block to obtain the background content data of the current macro block.
Wherein the current macroblock background content data belongs to the current frame background content data.
In the embodiment of the application, because the proportion of I-type macroblocks in a non-key frame image is small and the proportion of non-I-type macroblocks is large, motion compensation data corresponding to an I-type macroblock can be estimated from the motion compensation data of its adjacent macroblocks in the non-key frame image. The cached previous frame macroblock background content data corresponding to the floating window area is then motion-compensated to obtain the current macroblock background content data. The current frame background content data that includes this current macroblock background content data differs little from the background content actually corresponding to the current frame image, and the background content data of the floating window area can match the currently played video picture data in real time.
In the embodiment of the application, when the current macroblock data is an I-type macroblock, either path may be taken: the current macroblock data may first be decoded, then filtered, and then layer-overlaid to obtain the current macroblock background content data; or motion compensation data corresponding to the I-type macroblock may be estimated from the motion compensation data of its adjacent macroblocks in the non-key frame image and used to motion-compensate the cached previous frame macroblock background content data corresponding to the floating window area. In either case, the current frame background content data including the current macroblock background content data differs little from the background content actually corresponding to the current frame image, so the background content data of the floating window area can match the currently played video picture data in real time.
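Illustratively, the per-macroblock dispatch across S804b/S805a, S812 to S813 and S814 to S815 can be sketched as below, building on the Macroblock model above. The neighbour-MV estimate uses a plain average, which is an assumption; the embodiment does not fix the estimation method:

```python
def background_block(mb, neighbours, prev_bg, decode_and_filter, use_estimate=True):
    """mb: a Macroblock as sketched earlier; neighbours: adjacent macroblocks.
    decode_and_filter is an assumed stand-in for the S812-S813 path."""
    if not mb.is_intra:
        # S804b/S805a: reuse the MV and residual carried by the stream.
        return compensate_one(prev_bg, mb.pos, mb.mv, mb.residual)
    if use_estimate and neighbours:
        # S814-S815: estimate an MV for the intra block from its neighbours.
        mvs = [n.mv for n in neighbours if n.mv is not None]
        if mvs:
            est = (round(sum(m[0] for m in mvs) / len(mvs)),
                   round(sum(m[1] for m in mvs) / len(mvs)))
            return compensate_one(prev_bg, mb.pos, est, residual=None)
    # S812-S813 fallback: decode the intra block and filter it like a key frame.
    return decode_and_filter(mb)

def compensate_one(prev_bg, pos, mv, residual, block=16):
    """Read the best matching block (position minus MV) and add the residual."""
    x, y = pos
    ref = prev_bg[y - mv[1]:y - mv[1] + block, x - mv[0]:x - mv[0] + block]
    return ref if residual is None else ref + residual
```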
In some embodiments of the present application, after S807, the method for displaying a floating window according to the embodiment of the present application further includes S816 described below.
S816, storing the background content data of the current frame.
In the embodiment of the application, whether the current frame coded video stream is a key frame or a non-key frame, the current frame background content data needs to be stored, so that the next frame background content data corresponding to the next frame coded video stream can be obtained based on it.
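Illustratively, the cache handoff of S816 closes the per-frame loop. This sketch reuses current_frame_background from the earlier branch sketch; stream, compose and show are assumed stand-ins:

```python
def play(stream, compose, show, decode_and_filter, motion_compensate):
    """stream yields parsed frames; compose/show are assumed display helpers."""
    prev_bg = None
    for frame in stream:
        cur_bg = current_frame_background(frame, prev_bg,
                                          decode_and_filter, motion_compensate)
        show(compose(frame, cur_bg))   # display the composed current frame image
        prev_bg = cur_bg.copy()        # S816: cache as the next frame's reference
```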
The embodiment of the application also provides a computer readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process executed by the above floating window display method and achieves the same technical effects; to avoid repetition, no further description is provided here.
The computer readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a computer program product; when the computer program product runs on a computer, it causes the computer to implement the floating window display method described above.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. The illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, characterized by comprising:
A display;
A user input interface configured to: receiving a user input operation for invoking a floating window under the condition that a video playing interface is displayed;
A controller configured to: responding to the user input operation, and displaying a floating window area on the video playing interface;
In the process of playing video on the video playing interface, background content data of the floating window area is matched with currently played video picture data in real time, the transparency of the background content data is smaller than that of the video picture data, and the current frame background content data of the floating window area is obtained by performing motion compensation processing on cached previous frame background content data corresponding to the floating window area based on current frame motion compensation data.
2. The display device of claim 1, wherein the display device comprises a display device,
A controller specifically configured to: responding to the user input operation, and acquiring the position information of the floating window area in the video playing interface and the foreground content data in the floating window area;
Acquiring target coding data corresponding to a video area and background coding data corresponding to the floating window area from a current frame coding video stream based on the position information;
decoding the target coded data to obtain current frame video picture data;
Acquiring current frame motion compensation data from the background coding data;
Performing motion compensation processing on the cached background content data of the previous frame corresponding to the floating window area based on the current frame motion compensation data to obtain the current frame background content data of the floating window area;
and displaying the current frame image comprising the current frame video picture data, the current frame background content data and the foreground content data on a video playing interface.
3. The display device of claim 2, wherein the controller is further configured to:
Before the motion compensation data is obtained from the background coding data, judging whether the current frame coding video stream is a key frame or not;
the controller is specifically configured to:
And acquiring the current frame motion compensation data from the background coding data in the case that the current frame coding video stream is not a key frame.
4. The display device of claim 3, wherein the controller is further configured to:
after judging whether the current frame coded video stream is a key frame, and before the current frame image is generated based on the current frame video picture data, the current frame background content data and the foreground content data, decoding the background coded data to obtain background video picture data in the case that the current frame coded video stream is a key frame;
and filtering the background video picture data to obtain the background content data of the current frame.
5. The display device according to claim 4, wherein the background coded data includes first coded data corresponding to a first region and second coded data corresponding to a second region, the first region being a region of the floating window region in which the foreground content data is not displayed, and the second region being a region of the floating window region corresponding to the foreground content data; the current frame background content data comprises first background content data and second background content data;
the controller is specifically configured to:
decoding the first coded data to obtain first video picture data;
performing first filtering processing on the first video picture data to obtain the first background content data;
decoding the second encoded data to obtain second video picture data;
and performing second filtering processing on the second video picture data to obtain second background content data.
6. A display device as claimed in claim 3, wherein the background encoded data comprises a plurality of macroblock data;
The controller is further configured to:
for each macroblock data, before the motion compensation data is obtained from the background encoded data, if the current frame encoded video stream is not a key frame, determining whether the current macroblock data is an I-type macroblock;
the controller is specifically configured to:
Acquiring current macro block motion compensation data from the current macro block data under the condition that the current macro block data is not an I type macro block, wherein the current macro block motion compensation data belongs to the current frame motion compensation data;
And determining the background content data of the previous frame macro block matched with the current macro block data from the background content data of the previous frame based on the motion compensation data of the current macro block, and performing motion compensation processing on the background content data of the previous frame macro block to obtain the background content data of the current macro block, wherein the background content data of the current macro block belongs to the background content data of the current frame.
7. The display device of claim 6, wherein the controller is further configured to:
after judging whether the current macro block data is an I type macro block or not, decoding the current macro block data under the condition that the current macro block data is the I type macro block to obtain current macro block video picture data;
and filtering the current macro block video picture data to obtain current macro block background content data, wherein the current macro block background content data belongs to the current frame background content data.
8. The display device of claim 6, wherein the controller is further configured to:
after the determining whether the current macroblock data is an I-type macroblock, in case the current macroblock data is an I-type macroblock,
Acquiring supplementary motion compensation data corresponding to a current macro block from at least one macro block data adjacent to the current macro block data, wherein the supplementary motion compensation data belongs to the current frame motion compensation data;
And determining the previous frame macroblock background content data matched with the current macroblock data from the previous frame background content data based on the supplementary motion compensation data, and performing motion compensation processing on the previous frame macroblock background content data to obtain the current macroblock background content data, wherein the current macroblock background content data belongs to the current frame background content data.
9. The display device of any one of claims 2 to 7, wherein the controller is further configured to:
And storing the background content data of the current frame.
10. A floating window display method, characterized by being applied to a display device, comprising:
Receiving a user input operation for invoking a floating window under the condition that a video playing interface is displayed;
responding to the user input operation, and displaying a floating window area on the video playing interface;
In the process of playing video on the video playing interface, background content data of the floating window area is matched with currently played video picture data in real time, the transparency of the background content data is smaller than that of the video picture data, and the current frame background content data of the floating window area is obtained by performing motion compensation processing on cached previous frame background content data corresponding to the floating window area based on current frame motion compensation data.