CN116866658A - Video data processing method, device, equipment and medium

Video data processing method, device, equipment and medium

Info

Publication number: CN116866658A
Authority: CN (China)
Prior art keywords: target, rendering, current, texture data, display device
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202310835048.3A
Other languages: Chinese (zh)
Inventor: 胡建伟
Current Assignee: Apollo Intelligent Connectivity Beijing Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Apollo Intelligent Connectivity Beijing Technology Co Ltd
Application filed by Apollo Intelligent Connectivity Beijing Technology Co Ltd

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41422Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance located in transportation means, e.g. personal vehicle
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The disclosure provides a video data processing method, device, equipment, and medium, relating in particular to the technical fields of video streaming, video screen projection, video transmission, driver assistance, smart cockpits, cloud computing, and the like. The specific implementation scheme is as follows: acquiring target texture data corresponding to display content of a first display device, and determining a current rendering time interval according to the current time and the historical rendering time; rendering the target texture data according to a desired rendering time interval and the current rendering time interval to obtain a target video frame, wherein the desired rendering time interval is determined from a desired frame rate of a second display device; and video-encoding the target video frame to obtain a target encoded frame, and sending the target encoded frame to the second display device for the second display device to decode. The method enables display content to be shared between display devices at a constant frame rate, alleviates delay and stuttering, and improves the fluency of display content sharing.

Description

Video data processing method, device, equipment and medium
Technical Field
The disclosure relates to the technical field of computers, in particular to the technical fields of video streaming, video screen projection, video transmission, driver assistance, smart cockpits, cloud computing, and the like, and specifically to a video data processing method, device, equipment, and medium.
Background
With the development of video technology and smart devices, users often need to share display content between different devices. However, because different devices differ in data processing capability, sharing display content between them may cause problems such as delay and stuttering.
For example, when a central control display device in a vehicle shares display content with an instrument display device, the data processing performance of the instrument display device is normally far weaker than that of the central control display device, so the display interface of the instrument display device may suffer from delay, stuttering, and similar problems.
Disclosure of Invention
The present disclosure provides a video data processing method, apparatus, device, and medium that mitigate the delay and stuttering problems that arise when display content is shared between different devices.
According to an aspect of the present disclosure, there is provided a method of processing video data, including:
acquiring target texture data corresponding to display content of a first display device, and determining a current rendering time interval according to the current time and a historical rendering time;
rendering the target texture data according to a desired rendering time interval and the current rendering time interval to obtain a target video frame, wherein the desired rendering time interval is determined according to a desired frame rate of a second display device;
and video-encoding the target video frame to obtain a target encoded frame, and sending the target encoded frame to the second display device for the second display device to decode the target encoded frame.
According to another aspect of the present disclosure, there is provided a processing apparatus of video data, including:
a texture data acquisition module, configured to acquire target texture data corresponding to display content of a first display device and determine a current rendering time interval according to the current time and a historical rendering time;
a texture rendering module, configured to render the target texture data according to a desired rendering time interval and the current rendering time interval to obtain a target video frame, wherein the desired rendering time interval is determined according to a desired frame rate of a second display device;
and a video encoding module, configured to video-encode the target video frame to obtain a target encoded frame and send the target encoded frame to the second display device for the second display device to decode the target encoded frame.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, performs the method of any of the present disclosure.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1A is a schematic diagram of an interface for display content sharing according to an embodiment of the present disclosure;
FIG. 1B is a schematic diagram of an existing dynamic frame rate strategy according to an embodiment of the disclosure;
FIG. 1C is a flowchart of a video data processing method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of another video data processing method according to an embodiment of the present disclosure;
FIG. 3 is a comparative schematic diagram of video data processing flows according to an embodiment of the present disclosure;
FIG. 4 is a timing diagram of video data processing according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing the video data processing method disclosed in the embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the development of vehicle intelligence, more and more automobiles are equipped with intelligent display devices that present personalized information, such as navigation information or vehicle status information, to users. In addition, since some automobiles are equipped with more than one intelligent display device, display content is shared between the display devices so that users can view information conveniently. The most common scenario is the central control display device of an automobile sharing display content with the instrument display device. FIG. 1A is a schematic diagram of an interface for display content sharing according to some embodiments of the present disclosure. As shown in FIG. 1A, 10 denotes the display content of the central control display device, which is a navigation interface; 11 denotes the display content of the instrument display device, which is the same navigation interface as display content 10; the navigation interface is thus shared between the central control display device and the instrument display device.
To limit the traffic cost of the video stream in network transmission, current display content sharing mostly adopts a dynamic frame rate strategy. Taking the central control display device of an automobile sharing display content with the instrument display device as an example: when the display content of the central control display device contains no intense motion, the central control display device reduces the video encoding frame rate, that is, reduces the number of encoded video frames transmitted to the instrument display device per unit time; when the display content contains intense motion, the central control display device increases the video encoding frame rate, that is, increases the number of encoded video frames transmitted to the instrument display device per unit time. FIG. 1B is a schematic diagram of an existing dynamic frame rate strategy according to an embodiment of the present disclosure. As shown in FIG. 1B, with the abscissa representing time and the ordinate representing the video encoding frame rate of the display device, the video encoding frame rate fluctuates over time, sometimes lower and sometimes higher.
However, the instrument display device typically runs an ARM64-based real-time embedded system whose decoding capability is much weaker than that of the central control display device. Under the current dynamic frame rate strategy, a higher video encoding frame rate on the central control display device therefore puts considerable decoding pressure on the instrument display device, causing delay, stuttering, and similar problems in display content sharing.
To mitigate the above problem, the prior art generally reduces the number of key frames: the video encoder is actively triggered to output key frames at a fixed period instead of outputting key frames according to its own strategy. However, this approach is still in essence a dynamic frame rate strategy; some video decoders cannot decode the resulting stream, compatibility is poor, and it cannot be widely applied and popularized.
FIG. 1C is a flowchart of a video data processing method disclosed in embodiments of the present disclosure, which is applicable to the case where display content is shared between display devices. The method of this embodiment can be executed by the video data processing apparatus disclosed in the embodiments of the present disclosure; the apparatus can be implemented in software and/or hardware and can be integrated into any electronic device with computing capability, for example, the central control display device of an automobile.
As shown in fig. 1C, the method for processing video data disclosed in the present embodiment may include:
s101, acquiring target texture data corresponding to display content of a first display device, and determining a current rendering time interval according to a current time and a historical rendering time.
The first display device is the terminal device that shares its display content with other display devices; for example, it may be a smartphone or tablet, the central control display device of an automobile, or any other intelligent terminal with video encoding, video transmission, and display capabilities. The target texture data is texture data obtained by texture drawing from the view data corresponding to the display content; it refers to the visual texture effect presented in the display content, which consists of regular variations of pixel color and brightness.
The historical rendering time is the time at which the first display device performed a rendering operation on target texture data in the past; preferably, the time of the most recent rendering operation is selected as the historical rendering time. For example, if the first display device last performed a rendering operation on target texture data at 5:01:10, the historical rendering time is 5:01:10.
The current rendering time interval is the time difference between the current time and the historical rendering time. For example, if the current time is 5:01:12 and the historical rendering time is 5:01:10, the current rendering time interval is the difference between them, that is, 2 seconds.
In one embodiment, obtaining target texture data corresponding to display content of a first display device includes:
the first canvas is created by the first display device, and virtual display devices corresponding to the first display device are created after the creation is successful. The first canvas is essentially a drawing buffer area and can be used for performing operations such as texture drawing and rendering. The virtual display device is a virtual screen and is used for video recording of the display content, namely, view data corresponding to the display content is obtained.
The first display device obtains view data corresponding to display content through the virtual display device, and performs texture drawing on the view data in the first canvas to generate target texture data.
Optionally, when the operating system run by the first display device is the Android system, this embodiment may further include:
The first display device constructs a SurfaceTexture instance and takes the first Surface contained in the SurfaceTexture as the first canvas. After the construction succeeds, a virtual display device corresponding to the first display device is created; after the display service of the first display device detects the virtual display creation event, it notifies the SurfaceTexture to acquire the view data corresponding to the display content from the virtual display device and perform texture drawing on the view data on the first Surface to generate the target texture data.
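For illustration, a minimal Kotlin sketch of this capture setup is given below. It is an assumption about one possible realization, not the patent's verbatim code: the capture dimensions and display name are placeholders, and mirroring a real screen on production Android would additionally require MediaProjection or system privileges.

```kotlin
import android.graphics.SurfaceTexture
import android.hardware.display.DisplayManager
import android.view.Surface

// Placeholder capture parameters; the patent does not specify values.
const val CAPTURE_WIDTH = 1280
const val CAPTURE_HEIGHT = 720
const val CAPTURE_DPI = 160

fun createCapturePipeline(displayManager: DisplayManager): Pair<SurfaceTexture, Surface> {
    // The SurfaceTexture receives the mirrored frames; its Surface acts as
    // the "first canvas" (a drawing buffer) described above.
    val surfaceTexture = SurfaceTexture(0)  // texture name 0 as a placeholder; in practice a GL texture id is generated first
    surfaceTexture.setDefaultBufferSize(CAPTURE_WIDTH, CAPTURE_HEIGHT)
    val firstCanvas = Surface(surfaceTexture)

    // The virtual display device "video-records" the first display device's
    // content into the first canvas, yielding the view data for texture drawing.
    displayManager.createVirtualDisplay(
        "mirror-capture",  // assumed display name
        CAPTURE_WIDTH, CAPTURE_HEIGHT, CAPTURE_DPI,
        firstCanvas,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY
    )
    return surfaceTexture to firstCanvas
}
```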
In one embodiment, determining the current rendering time interval from the current time and the historical rendering time includes:
the first display device determines the current moment through a timer and the historical rendering moment through a log file; and calculating the current rendering time interval according to the current time and the historical rendering time.
Acquiring the target texture data corresponding to the display content lays a data foundation for subsequently rendering video frames based on texture data; determining the current rendering time interval from the current time and the historical rendering time implements rendering timing and lays a data foundation for deciding, based on the current rendering time interval, whether to trigger a rendering operation.
S102, rendering the target texture data according to the desired rendering time interval and the current rendering time interval to obtain a target video frame.
The desired rendering time interval is a fixed time interval set for the first display device to perform rendering operations on target texture data continuously; it is determined according to the desired frame rate of the second display device. The second display device is the terminal device with which the first display device shares display content; for example, it may be a smartphone or tablet, the instrument display device of an automobile, or any other intelligent terminal with video decoding, video transmission, and display capabilities.
The desired frame rate is the constant frame rate at which the second display device wants to play the display content during display content sharing, for example a constant 60 FPS or a constant 30 FPS. The desired frame rate may be set according to the decoding capability of the second display device: when the decoding capability of the second display device is stronger, the desired frame rate is correspondingly higher; when it is weaker, the desired frame rate is correspondingly lower.
The desired frame rate may be stored locally on the first display device in advance, or obtained from the second display device in real time. After obtaining the desired frame rate, the first display device can directly calculate the desired rendering time interval. For example, if the desired frame rate of the second display device is 30 FPS, the second display device plays 30 video frames per second; correspondingly, the first display device needs to render 30 video frames per second for the second display device to play, so the desired rendering time interval is 1 second / 30 ≈ 33.3 milliseconds.
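As a worked example of the calculation above (a trivial sketch; the function name is ours):

```kotlin
// Desired rendering time interval in nanoseconds, derived from the second
// display device's desired frame rate: 30 FPS -> 33,333,333 ns (~33.3 ms).
fun desiredRenderIntervalNs(desiredFps: Int): Long = 1_000_000_000L / desiredFps
```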
The target video frame represents an image frame obtained by rendering the target texture data.
In one embodiment, the first display device obtains the desired rendering time interval and compares it with the current rendering time interval. If the current rendering time interval is smaller than the desired rendering time interval, that is, the desired rendering time interval has not yet been reached, the first display device does not render the target texture data; if the current rendering time interval equals the desired rendering time interval, that is, the desired rendering time interval has been reached, the first display device renders the target texture data. When the rendering operation completes, the current rendering time interval is recalculated and again compared with the desired rendering time interval, and so on.
Optionally, when the operating system run by the first display device is the Android system, this embodiment may further include:
The first display device compares the desired rendering time interval and the current rendering time interval in real time through the SurfaceTexture; when the current rendering time interval is determined to equal the desired rendering time interval, a notification is sent to the open graphics library OpenGL, and the target texture data is then rendered on a second Surface through OpenGL to generate the target video frame.
In another embodiment, the first display device obtains the desired rendering time interval and calculates an actual rendering time interval from the desired rendering time interval and a time error. The time error is an error value in the execution time of the rendering operation and can be set empirically; it can be positive or negative. A positive time error means the rendering operation lags the desired rendering time interval; a negative time error means the rendering operation is advanced relative to the desired rendering time interval.
For example, if the desired rendering time interval is 10 milliseconds and the time error is 1 millisecond, the actual rendering time interval is 10 + 1 = 11 milliseconds; if the desired rendering time interval is 10 milliseconds and the time error is -1 millisecond, the actual rendering time interval is 10 - 1 = 9 milliseconds.
If the current rendering time interval is smaller than the actual rendering time interval, that is, the actual rendering time interval has not yet been reached, the first display device does not render the target texture data; if the current rendering time interval equals the actual rendering time interval, the first display device renders the target texture data.
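A minimal sketch of this interval check, assuming timestamps from Android's monotonic clock and a time error that defaults to zero (so the same code also covers the first embodiment); in a real loop the comparison uses >= rather than strict equality, since a clock rarely lands exactly on the interval:

```kotlin
import android.os.SystemClock

class RenderPacer(
    private val desiredIntervalNs: Long,
    private val timeErrorNs: Long = 0L  // positive: lag the desired interval; negative: lead it
) {
    // Historical rendering time: the timestamp of the last rendering operation.
    private var lastRenderNs = SystemClock.elapsedRealtimeNanos()

    // Returns true when the current rendering interval has reached the actual
    // rendering interval (the desired interval adjusted by the time error).
    fun shouldRenderNow(): Boolean {
        val nowNs = SystemClock.elapsedRealtimeNanos()
        val currentIntervalNs = nowNs - lastRenderNs
        if (currentIntervalNs >= desiredIntervalNs + timeErrorNs) {
            lastRenderNs = nowNs  // restart the interval for the next frame
            return true
        }
        return false
    }
}
```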
Under the prior-art dynamic frame rate strategy, the first display device renders target texture data immediately every time it is acquired. When the display content contains intense motion, many frames of target texture data are generated, and the first display device correspondingly renders many video frames; the second display device must then receive, decode, and play many encoded frames sent by the first display device within a short time, which puts decoding pressure on it and easily causes delay and stuttering. In the present disclosure, the target texture data is rendered according to the desired rendering time interval and the current rendering time interval to obtain the target video frame, so a rendering-interval check is added at the rendering stage. This prevents the first display device from rendering target texture data too frequently and overloading the second display device's decoder, and ensures that the second display device plays the display content at a constant frame rate.
S103, video-encoding the target video frame to obtain a target encoded frame, and sending the target encoded frame to the second display device for the second display device to decode the target encoded frame.
The target encoded frame is a video frame obtained by video-encoding the target video frame. The video encoding format in this embodiment may be the H.264 format.
In one embodiment, the first display device video-encodes the target video frame through a video encoder to obtain the target encoded frame, which is transmitted to the second display device over the communication connection between the two devices. The communication connection between the first display device and the second display device may be, for example, an Internet or Ethernet connection.
The second display device receives the target encoded frame sent by the first display device, decodes it through its video decoder, and plays the decoded target video frame.
Optionally, when the first display device is a central control display device and the second display device is an instrument display device, the communication connection between the first display device and the second display device is established in the following manner:
When the vehicle is ignited and powered on, the first display device sends a communication connection request to a specific IP address and port of the second display device to perform a handshake, and the second display device responds to the request to establish an Ethernet connection with the first display device.
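A sketch of this handshake from the first display device's side; the host, port, and timeout are placeholders, since the patent only says "a specific IP and port":

```kotlin
import java.net.InetSocketAddress
import java.net.Socket

const val METER_HOST = "192.168.1.2"  // hypothetical meter-display address
const val METER_PORT = 9000           // hypothetical port

// Called when the vehicle is ignited and powered on: the central control
// display initiates the connection over the in-vehicle Ethernet.
fun connectToMeterDisplay(): Socket =
    Socket().apply { connect(InetSocketAddress(METER_HOST, METER_PORT), 3000) }
```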
Video-encoding the target video frame improves the efficiency and quality of video transmission; and sending the target encoded frame to the second display device for the second display device to decode realizes display content sharing between the first display device and the second display device and meets users' specific needs.
In summary, target texture data corresponding to the display content of the first display device is acquired, the current rendering time interval is determined from the current time and the historical rendering time, and the target texture data is rendered according to the desired rendering time interval and the current rendering time interval to obtain the target video frame, where the desired rendering time interval is determined from the desired frame rate of the second display device; the target video frame is then video-encoded into a target encoded frame and sent to the second display device for decoding. Because the first display device adds a rendering-interval check at the rendering stage, the delay and stuttering caused by excessive decoding pressure on the second display device are avoided, the second display device is guaranteed to play the display content at a constant frame rate, and the fluency of display content sharing is improved. Moreover, no special requirement is placed on the video decoder of the second display device, so compatibility is good and the method can be widely applied and popularized.
FIG. 2 is a flowchart of another video data processing method according to an embodiment of the present disclosure, which is further optimized and expanded on the basis of the above technical solution and can be combined with each of the above optional embodiments.
As shown in fig. 2, the method for processing video data disclosed in this embodiment may include:
s201, creating a virtual display device corresponding to the first display device, and acquiring current view data corresponding to the current display content through the virtual display device.
The current display content is the display content of the first display device at the current moment. Accordingly, the current view data is the view data corresponding to the current display content, where view data is the representation of display content as two-dimensional image data.
In one embodiment, the first display device creates a virtual display device corresponding to the first display device, and obtains current view data corresponding to the current display content from the virtual display device.
Optionally, when the operating system run by the first display device is the Android system, this embodiment may further include:
The first display device constructs a SurfaceTexture instance and takes the first Surface contained in the SurfaceTexture as the first canvas. After the construction succeeds, a virtual display device corresponding to the first display device is created; after the display service of the first display device detects the virtual display device creation event, it notifies the SurfaceTexture to acquire the current view data corresponding to the current display content from the virtual display device.
S202, if the current view data is different from the historical view data corresponding to the historical display content, performing texture drawing on the current view data in the first canvas to generate current texture data, and taking the current texture data as target texture data.
The first canvas is essentially a drawing buffer and can be used for operations such as texture drawing and rendering. The historical display content is the display content of the first display device at the last historical moment; correspondingly, the historical view data is the view data corresponding to the historical display content. The last historical moment can be set and adjusted according to actual service requirements, for example, 10 milliseconds before the current moment.
In one embodiment, the first display device compares the current view data with the historical view data. If they differ, the current display content has changed relative to the historical display content, so the current view data needs to be shared with the second display device; the first display device therefore performs texture drawing on the current view data in the first canvas to generate the current texture data as the target texture data. The target texture data then waits to be rendered into a target video frame.
Optionally, when the operating system run by the first display device is the Android system, this embodiment may further include:
The first display device constructs a SurfaceTexture instance and takes the first Surface contained in the SurfaceTexture as the first canvas. After the construction succeeds, a virtual display device corresponding to the first display device is created; after the display service detects the virtual display device creation event, it notifies the SurfaceTexture to acquire the current view data corresponding to the current display content from the virtual display device. The SurfaceTexture then obtains the historical view data and, if the current view data differs from it, performs texture drawing on the current view data on the first Surface to generate the current texture data as the target texture data.
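On Android, one plausible way to draw "only when the view data changes" is SurfaceTexture's frame-available callback, which fires only when the virtual display device delivers a new buffer; unchanged content produces no callback. This is an assumption about the mechanism, sketched below:

```kotlin
import android.graphics.SurfaceTexture

// Set when unrendered current texture data exists; cleared after rendering.
@Volatile
var hasPendingFrame = false

fun watchForNewViewData(surfaceTexture: SurfaceTexture) {
    // The callback runs only when the virtual display produces a frame whose
    // view data differs from what was previously composited, so no redundant
    // texture drawing happens for unchanged display content.
    surfaceTexture.setOnFrameAvailableListener { hasPendingFrame = true }
}
```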
A virtual display device corresponding to the first display device is created, and the current view data corresponding to the current display content is acquired through it; texture drawing is performed on the current view data in the first canvas, generating the current texture data as the target texture data, only if the current view data differs from the historical view data corresponding to the historical display content. Texture data is thus generated only when the view data changes, which avoids repeatedly drawing the same view data, reduces the pressure of the drawing process, and improves its efficiency.
Optionally, obtaining target texture data corresponding to the display content of the first display device further includes:
when the desired rendering time interval is the same as the current rendering time interval and no unrendered current texture data exists, the historical texture data generated by the last texture drawing is taken as the target texture data.
The desired rendering time interval being the same as the current rendering time interval indicates that the first display device needs to trigger a rendering operation; the absence of unrendered current texture data indicates that no current texture data is available for rendering. The historical texture data generated by the last texture drawing is the current texture data generated the last time the first display device performed a texture drawing operation.
In one embodiment, when the desired rendering time interval is the same as the current rendering time interval and there is no unrendered current texture data, the first display device acquires the historical texture data generated by the last texture drawing from the texture database and uses it as the target texture data for rendering.
In other words, when the desired rendering time interval is the same as the current rendering time interval: if unrendered current texture data exists, the first display device renders it directly; if not, the first display device renders the historical texture data directly.
Taking the historical texture data generated by the last texture drawing as the target texture data when the desired rendering time interval equals the current rendering time interval and no unrendered current texture data exists ensures that every rendering operation of the first display device executes smoothly. This avoids interruptions of rendering on the first display device that would prevent the second display device from playing video at a constant frame rate, further guaranteeing the fluency of display content sharing.
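The fallback reduces to a one-line selection, sketched here with a hypothetical TextureData type:

```kotlin
// Placeholder type standing in for one frame of texture data.
class TextureData

// At each rendering tick: render the pending current texture data if any;
// otherwise re-render the historical texture data from the last texture
// drawing, so the constant-frame-rate rendering is never interrupted.
fun selectTargetTexture(pending: TextureData?, lastDrawn: TextureData): TextureData =
    pending ?: lastDrawn
```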
S203, determining a current rendering time interval according to the current time and the historical rendering time.
S204, comparing the desired rendering time interval with the current rendering time interval, and rendering the target texture data according to the comparison result to obtain a target video frame.
In one embodiment, the first display device compares the desired rendering time interval with the current rendering time interval and determines, according to the relationship between the two, whether to trigger the rendering of the target texture data.
Comparing the desired rendering time interval with the current rendering time interval and rendering the target texture data according to the comparison result adds a rendering-interval check at the rendering stage, preventing the first display device from rendering target texture data so frequently that the second display device's decoding pressure becomes excessive and delay and stuttering result.
Optionally, comparing the desired rendering time interval with the current rendering time interval and rendering the target texture data according to the comparison result to obtain the target video frame includes:
rendering the target texture data in the case that the desired rendering time interval is the same as the current rendering time interval.
In one embodiment, the first display device compares the desired rendering time interval with the current rendering time interval; if the two are the same, the first display device triggers the rendering of the target texture data to obtain the target video frame.
Rendering the target texture data when the desired rendering time interval is the same as the current rendering time interval guarantees, on the one hand, that the second display device can play video at a constant frame rate and, on the other hand, that this constant frame rate is exactly the desired frame rate of the second display device.
Optionally, rendering the target texture data in the case that the desired rendering time interval is the same as the current rendering time interval includes:
rendering the target texture data according to the texture drawing order of the target texture data when the desired rendering time interval is the same as the current rendering time interval and the target texture data is current texture data.
When the display content of the first display device contains intense motion, multiple frames of current texture data are generated over time, and these frames have a texture drawing order representing the sequence in which each frame was drawn. To keep the rendered video frames in the correct order, current texture data earlier in the texture drawing order is rendered before current texture data later in the order.
In one embodiment, when the desired rendering time interval is the same as the current rendering time interval and the target texture data is current texture data, the first display device determines the texture drawing order of each frame of current texture data and selects for rendering the frame that is earliest in the order, that is, the one drawn first.
Rendering the target texture data according to its texture drawing order in this case ensures the correct order of the rendered video frames and avoids frame skipping or frame dropping during display content sharing that would hurt the user experience.
Optionally, rendering the target texture data includes:
obtaining the target texture data from a first canvas, and rendering the target texture data in a second canvas, wherein the first canvas is different from the second canvas.
Both the first canvas and the second canvas are essentially drawing buffers that can be used for operations such as texture drawing and rendering.
In one embodiment, the first display device obtains the target texture data from the first canvas and inputs it into the second canvas, where the rendering operation is performed on it to produce the target video frame.
Optionally, when the operating system run by the first display device is the Android system, this embodiment may further include:
The first display device constructs a SurfaceTexture instance and a custom class EncoderSurface, taking the first Surface contained in the SurfaceTexture as the first canvas and the EncoderSurface as the second canvas. The first display device compares the desired rendering time interval and the current rendering time interval in real time through the SurfaceTexture; when the current rendering time interval equals the desired rendering time interval, a notification is sent to the open graphics library OpenGL, which obtains the target texture data from the first Surface and renders it in the EncoderSurface to obtain the target video frame.
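A hedged sketch of one rendering tick under this scheme: the newest texture-drawn frame is latched from the SurfaceTexture (first canvas) and drawn into an EGL window surface wrapping the encoder's input Surface (the EncoderSurface, second canvas). drawFullScreenQuad stands in for the GLES draw calls and is assumed, as is the pre-existing EGL context:

```kotlin
import android.graphics.SurfaceTexture
import android.opengl.EGL14
import android.opengl.EGLExt
import android.opengl.EGLSurface

// Assumed helper: binds the external OES texture and draws a full-screen quad.
fun drawFullScreenQuad(st: SurfaceTexture) { /* GLES20 draw calls elided */ }

fun renderTick(surfaceTexture: SurfaceTexture, encoderSurface: EGLSurface, ptsNs: Long) {
    surfaceTexture.updateTexImage()     // take target texture data from the first canvas
    drawFullScreenQuad(surfaceTexture)  // render it into the second canvas
    val display = EGL14.eglGetCurrentDisplay()
    // Timestamp the frame so the encoder sees the constant desired frame rate.
    EGLExt.eglPresentationTimeANDROID(display, encoderSurface, ptsNs)
    EGL14.eglSwapBuffers(display, encoderSurface)  // hand the frame to the encoder
}
```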
Obtaining the target texture data from the first canvas and rendering it in the second canvas separates the texture drawing operation from the rendering operation, improving the execution efficiency of the overall flow.
S205, video-encoding the target video frame to obtain a target encoded frame, and sending the target encoded frame to the second display device for the second display device to decode the target encoded frame.
In one embodiment, the first display device inputs the target video frame into a video encoder, which video-encodes it and outputs the target encoded frame. Based on the communication connection with the second display device, the target encoded frame is then sent to the second display device, which decodes it and plays the decoded target video frame.
Optionally, before rendering the target texture data according to the desired rendering time interval and the current rendering time interval, the method further includes:
and acquiring expected coding parameters from the second display equipment, and initializing the original video encoder according to the expected coding parameters to obtain the standard video encoder.
Wherein the desired encoding parameters embody what encoding parameters the second display device expects the first display device to encode video in accordance with. The original video encoder represents a video encoder set with default encoding parameters; a standard video encoder represents a video encoder that is set with desired encoding parameters.
In one embodiment, the second display device transmits the desired encoding parameters to the first display device based on a communication connection with the first display device. The first display device modifies the coding parameters of the original video coder according to the expected coding parameters, and is used for initializing the original video coder to obtain a standard video coder.
Alternatively, when the operating system run by the first display device is the Android system, the original video encoder may be MediaCodec.
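For example, initializing MediaCodec with the desired encoding parameters might look like the following sketch (concrete values are placeholders; the H.264 format matches the encoding format mentioned above):

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat
import android.view.Surface

// Configure the "standard video encoder": a MediaCodec set up with the
// desired encoding parameters received from the second display device.
fun initStandardEncoder(width: Int, height: Int, desiredFps: Int, desiredBitRate: Int): Pair<MediaCodec, Surface> {
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height).apply {
        setInteger(MediaFormat.KEY_BIT_RATE, desiredBitRate)      // desired bit rate
        setInteger(MediaFormat.KEY_FRAME_RATE, desiredFps)        // desired frame rate
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)           // assumed key-frame period (s)
        setInteger(
            MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface  // frames arrive via a Surface
        )
    }
    val codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
    codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    val inputSurface = codec.createInputSurface()  // backs the second canvas (EncoderSurface)
    codec.start()
    return codec to inputSurface
}
```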
Video-encoding the target video frame to obtain the target encoded frame includes:
inputting the target video frame to the standard video encoder; and video-encoding the target video frame through the standard video encoder and outputting the target encoded frame.
In one embodiment, the first display device inputs the target video frame into the standard video encoder, which video-encodes it and outputs the target encoded frame.
Video-encoding the target video frame through the standard video encoder means the first display device encodes video according to the encoding parameters desired by the second display device, so the target encoded frame meets the second display device's requirements and display content sharing can proceed smoothly.
Optionally, the desired encoding parameters include at least one of a desired frame rate, a desired bit rate, a desired resolution, and a desired frame size.
Setting the desired encoding parameters to include at least one of the desired frame rate, desired bit rate, desired resolution, and desired frame size enriches the parameter content of the desired encoding parameters, so the target encoded frames produced by the standard video encoder better satisfy the encoding requirements of the second display device.
Optionally, inputting the target video frame to a standard video encoder includes:
inputting the target video frame into an input buffer corresponding to the standard video encoder; and inputting target video frames to the standard video encoder according to the input order of the target video frames in the input buffer.
The input buffer caches the target video frames to be input to the video encoder; for example, it may be an InputBuffer. The input order reflects the sequence in which the target video frames entered the input buffer: a target video frame that entered earlier is video-encoded before one that entered later.
In one embodiment, the first display device inputs the target video frames into the input buffer, determines the input order of each target video frame, selects the frame that is earliest in the order, that is, the one that entered the input buffer first, and inputs it into the standard video encoder for video encoding.
Inputting the target video frames into the input buffer corresponding to the standard video encoder and feeding them to the standard video encoder in their input order ensures that video encoding of the target video frames is performed sequentially, avoiding frame skipping or frame dropping that would hurt the user experience when the display content is shared on the second display device.
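When the encoder is fed through byte buffers rather than an input Surface, MediaCodec's FIFO buffer queue realizes exactly this input ordering; a sketch:

```kotlin
import android.media.MediaCodec

// Queue one target video frame in arrival order. dequeueInputBuffer hands
// out buffers in FIFO order, so earlier frames are encoded first.
fun queueFrame(codec: MediaCodec, frame: ByteArray, ptsUs: Long) {
    val index = codec.dequeueInputBuffer(10_000 /* timeoutUs */)
    if (index >= 0) {
        val buffer = codec.getInputBuffer(index) ?: return
        buffer.clear()
        buffer.put(frame)
        codec.queueInputBuffer(index, 0, frame.size, ptsUs, 0)
    }
}
```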
Optionally, sending the target encoded frame to the second display device includes:
inputting the target encoded frame into an output buffer corresponding to the standard video encoder; and sending target encoded frames to the second display device according to the input order of the target encoded frames in the output buffer.
The output buffer caches the target encoded frames output by the video encoder; for example, it may be an OutputBuffer. The input order reflects the sequence in which the target encoded frames entered the output buffer: a target encoded frame that entered earlier is sent to the second display device before one that entered later.
In one embodiment, the first display device inputs the target encoded frames into the output buffer, determines the input order of each target encoded frame, selects the frame that is earliest in the order, that is, the one that entered the output buffer first, and sends it to the second display device for decoding.
Inputting the target encoded frames into the output buffer corresponding to the standard video encoder and sending them to the second display device in their input order ensures that video transmission of the target encoded frames is performed sequentially, avoiding frame skipping or frame dropping that would hurt the user experience when the display content is shared on the second display device.
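The matching output side drains target encoded frames in order and forwards each to the second display device; a sketch over the socket established earlier (framing/protocol details omitted, and format-change events ignored for brevity):

```kotlin
import android.media.MediaCodec
import java.io.OutputStream

fun drainAndSend(codec: MediaCodec, out: OutputStream) {
    val info = MediaCodec.BufferInfo()
    while (true) {
        val index = codec.dequeueOutputBuffer(info, 10_000 /* timeoutUs */)
        if (index < 0) break  // no complete encoded frame ready yet
        val buffer = codec.getOutputBuffer(index) ?: break
        val encoded = ByteArray(info.size)
        buffer.position(info.offset)
        buffer.get(encoded)
        out.write(encoded)  // send the target encoded frame in queue order
        codec.releaseOutputBuffer(index, /* render = */ false)
    }
}
```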
FIG. 3 is a comparative schematic diagram of video data processing flows disclosed in an embodiment of the present disclosure, comparing the prior-art dynamic frame rate approach to display content sharing with the constant frame rate approach of the present disclosure.
As shown in FIG. 3, taking the first display device as a central control display device, the second display device as an instrument display device, and the operating system of the first display device as the Android system as an example, the prior-art dynamic frame rate process flow for display content sharing can be summarized as follows:
the central control display device creates a corresponding virtual display device virtual display and the video encoder MediaCodec provides canvas surface. And obtaining view data of the display content through the visual display, carrying out texture drawing operation and rendering operation in the canvas surface in real time, inputting video frames obtained in the canvas surface to the MediaCodec for video coding, and finally sending the coded frames to the instrument display device through the vehicle service.
It can be seen that in the prior-art dynamic frame rate scheme, texture drawing and rendering are performed in real time in the canvas provided by MediaCodec. When the display content contains intense motion, a large number of video frames are inevitably generated in a short time, so the video frame rate surges, placing considerable decoding pressure on the instrument display device and causing delay, stuttering, and similar problems.
The constant frame rate process flow of the present disclosure for display content sharing can be summarized as follows:
The central control display device constructs a SurfaceTexture instance and a custom class EncoderSurface, taking the first Surface contained in the SurfaceTexture as the first canvas and the EncoderSurface as the second canvas. After the construction succeeds, a virtual display device corresponding to the central control display device is created; view data corresponding to the display content is acquired from the virtual display device through the SurfaceTexture, and texture drawing is performed on the view data on the first Surface to generate the target texture data.
The central control display device compares the desired rendering time interval with the current rendering time interval in real time through the SurfaceTexture; when the current rendering time interval equals the desired rendering time interval, a notification is sent to the open graphics library OpenGL, which obtains the target texture data from the first Surface and renders it in the EncoderSurface to obtain the target video frame. The target video frame is then input to the MediaCodec for video encoding, and the target encoded frame is finally sent to the instrument display device through the vehicle service CarService.
It can be seen that in the constant frame rate scheme of the present disclosure, the texture drawing operation is performed in real time on the first Surface of the SurfaceTexture, but the target texture data is obtained from the first Surface and rendered in the EncoderSurface into a target video frame only when the current rendering time interval equals the desired rendering time interval. Therefore, even when the display content contains intense motion, rendering is performed periodically, video frames are generated periodically, and encoded frames are produced periodically, ensuring that the second display device plays the display content at a constant frame rate and improving the fluency of display content sharing.
FIG. 4 is a timing diagram of video data processing disclosed in an embodiment of the disclosure. As shown in FIG. 4, taking the operating system of the first display device as the Android system as an example:
The standard video encoder MediaCodec is obtained by initialization according to the desired encoding parameters.
An EncoderSurface is created, and a SurfaceTexture is created. An instruction is sent to the DisplayManager through the SurfaceTexture to create a VirtualDisplay through the DisplayManager.
The DisplayManager sends a creation event notification to the DisplayService, which acquires the target view data from the VirtualDisplay and sends it to the SurfaceTexture.
The SurfaceTexture performs texture drawing on the target view data to generate the target texture data. The SurfaceTexture performs the interval check, that is, compares the desired rendering time interval with the current rendering time interval in real time; when the current rendering time interval is determined to equal the desired rendering time interval, the target texture data is sent to the EncoderSurface and rendered through the EncoderSurface to obtain the target video frame.
The EncoderSurface sends the target video frame to the MediaCodec, which video-encodes it to obtain the target encoded frame.
FIG. 5 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present disclosure, suitable for the case where display content is shared between display devices. The apparatus of this embodiment can be implemented in software and/or hardware and can be integrated into any electronic device with computing capability.
As shown in fig. 5, the processing apparatus 50 for video data disclosed in this embodiment may include a texture data obtaining module 51, a texture rendering module 52, and a video encoding module 53, where:
the texture data obtaining module 51 is configured to obtain target texture data corresponding to display content of the first display device, and determine a current rendering time interval according to a current time and a historical rendering time;
the texture rendering module 52 is configured to render the target texture data according to a desired rendering time interval and the current rendering time interval, so as to obtain a target video frame; wherein the desired rendering time interval is determined from a desired frame rate of the second display device;
the video encoding module 53 is configured to perform video encoding on the target video frame to obtain a target encoded frame, and send the target encoded frame to the second display device, so that the second display device decodes the target encoded frame.
Optionally, the texture data acquisition module 51 is specifically configured to:
creating a virtual display device corresponding to the first display device, and acquiring current view data corresponding to current display content through the virtual display device;
if the current view data is different from the historical view data corresponding to the historical display content, carrying out texture drawing on the current view data in a first canvas to generate current texture data, and taking the current texture data as the target texture data;
the current display content is the display content of the first display device at the current moment, and the historical display content is the display content of the first display device at the last historical moment.
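On Android, one way to obtain the "only draw when the view data changed" behavior is the SurfaceTexture frame-available callback, which fires only when the virtual display delivers a new buffer (a sketch; `drawToFirstCanvas()` is a hypothetical helper, and `updateTexImage()` must run on the thread that owns the EGL context):

```kotlin
// Fires only when new view data arrives from the virtual display; unchanged
// display content therefore triggers no texture drawing.
surfaceTexture.setOnFrameAvailableListener { st ->
    st.updateTexImage()   // latch the newest buffer into the GL texture
    drawToFirstCanvas()   // hypothetical: texture drawing that yields the current texture data
}
```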
Optionally, the texture data acquisition module 51 is specifically configured to:
taking the historical texture data generated by the last texture drawing as the target texture data in the case that the desired rendering time interval is the same as the current rendering time interval and no unrendered current texture data exists.
Optionally, the texture rendering module 52 is specifically configured to:
comparing the desired rendering time interval with the current rendering time interval, and rendering the target texture data according to the comparison result.
Optionally, the texture rendering module 52 is specifically further configured to:
rendering the target texture data in the case that the desired rendering time interval is the same as the current rendering time interval.
Optionally, the texture rendering module 52 is specifically further configured to:
in the case that the desired rendering time interval is the same as the current rendering time interval and the target texture data is the current texture data, rendering the target texture data according to the texture drawing order of the target texture data.
Optionally, the texture rendering module 52 is specifically further configured to:
acquiring the target texture data from the first canvas, and rendering the target texture data in a second canvas; wherein the first canvas is different from the second canvas.
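In EGL terms, the two canvases are typically two EGL surfaces sharing one context: the texture drawn on the first is sampled and re-rendered into the window surface wrapping the EncoderSurface. A sketch (the parameters and the `drawTexturedQuad()` shader helper are assumptions, not the disclosure's implementation):

```kotlin
import android.opengl.EGL14
import android.opengl.EGLContext
import android.opengl.EGLDisplay
import android.opengl.EGLExt
import android.opengl.EGLSurface

// Render the target texture data (first canvas) into the EGL window surface
// wrapping the EncoderSurface (second canvas); swapping buffers submits one
// target video frame to the encoder.
fun renderToEncoderSurface(
    eglDisplay: EGLDisplay,
    eglContext: EGLContext,
    eglWindowSurface: EGLSurface,
    texId: Int,
    presentationTimeNs: Long
) {
    EGL14.eglMakeCurrent(eglDisplay, eglWindowSurface, eglWindowSurface, eglContext)
    drawTexturedQuad(texId)  // hypothetical helper: draws a quad sampling the SurfaceTexture
    EGLExt.eglPresentationTimeANDROID(eglDisplay, eglWindowSurface, presentationTimeNs)
    EGL14.eglSwapBuffers(eglDisplay, eglWindowSurface)
}
```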
Optionally, the apparatus further includes a video encoder initialization module, specifically configured to:
acquiring desired encoding parameters from the second display device, and initializing an original video encoder according to the desired encoding parameters to obtain a standard video encoder;
the video encoding module 53 is specifically configured to:
inputting the target video frame to the standard video encoder;
and performing video encoding on the target video frame through the standard video encoder, and outputting the target encoded frame.
Optionally, the desired encoding parameters include at least one of a desired frame rate, a desired bit rate, a desired resolution, and a desired frame size.
Optionally, the video encoding module 53 is specifically further configured to:
inputting the target video frame into an input buffer corresponding to the standard video encoder;
and inputting the target video frame to the standard video encoder according to its input order in the input buffer.
Optionally, the video encoding module 53 is specifically further configured to:
inputting the target encoded frame into an output buffer corresponding to the standard video encoder;
and sending the target encoded frame to the second display device according to its input order in the output buffer.
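With a Surface-input encoder, the input side is fed through the EncoderSurface, while the output side is drained in buffer order through MediaCodec's queue, which is what preserves the sending order described above. A sketch (`codec` is the standard video encoder from the earlier sketch, and `sendToSecondDisplay()` is a hypothetical transport call):

```kotlin
// Drain target encoded frames in their output-buffer order and send them to
// the second display device for decoding.
val info = MediaCodec.BufferInfo()
loop@ while (true) {
    when (val index = codec.dequeueOutputBuffer(info, /* timeoutUs = */ 10_000)) {
        MediaCodec.INFO_TRY_AGAIN_LATER -> break@loop       // no encoded frame ready yet
        MediaCodec.INFO_OUTPUT_FORMAT_CHANGED -> continue@loop
        else -> if (index >= 0) {
            val encoded = codec.getOutputBuffer(index) ?: continue@loop
            sendToSecondDisplay(encoded, info)              // hypothetical network send
            codec.releaseOutputBuffer(index, /* render = */ false)
        }
    }
}
```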
The video data processing apparatus 50 disclosed in the embodiments of the present disclosure may execute the video data processing method disclosed in the embodiments of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method. For details not described in this embodiment, reference may be made to the description of the method embodiments of the present disclosure.
In the technical solution of the present disclosure, the acquisition, storage, and application of the user personal information involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, for example, a processing method of video data. For example, in some embodiments, the method of processing video data may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When a computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the above-described processing method of video data may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the processing method of video data in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above can be realized in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (25)

1. A method of processing video data, comprising:
acquiring target texture data corresponding to display content of a first display device, and determining a current rendering time interval according to a current time and a historical rendering time;
rendering the target texture data according to a desired rendering time interval and the current rendering time interval to obtain a target video frame; wherein the desired rendering time interval is determined from a desired frame rate of a second display device;
and performing video encoding on the target video frame to obtain a target encoded frame, and sending the target encoded frame to the second display device for the second display device to decode the target encoded frame.
2. The method of claim 1, wherein the acquiring target texture data corresponding to the display content of the first display device comprises:
creating a virtual display device corresponding to the first display device, and acquiring current view data corresponding to current display content through the virtual display device;
if the current view data is different from the historical view data corresponding to the historical display content, carrying out texture drawing on the current view data in a first canvas to generate current texture data, and taking the current texture data as the target texture data;
the current display content is the display content of the first display device at the current moment, and the historical display content is the display content of the first display device at the last historical moment.
3. The method of claim 2, wherein the acquiring target texture data corresponding to the display content of the first display device comprises:
taking the historical texture data generated by the last texture drawing as the target texture data in the case that the desired rendering time interval is the same as the current rendering time interval and no unrendered current texture data exists.
4. The method according to claim 3, wherein the rendering the target texture data according to the desired rendering time interval and the current rendering time interval comprises:
comparing the desired rendering time interval with the current rendering time interval, and rendering the target texture data according to a comparison result.
5. The method of claim 4, wherein the rendering the target texture data according to the comparison result comprises:
rendering the target texture data in the case that the desired rendering time interval is the same as the current rendering time interval.
6. The method of claim 5, wherein the rendering the target texture data in the case that the desired rendering time interval is the same as the current rendering time interval comprises:
in the case that the desired rendering time interval is the same as the current rendering time interval and the target texture data is the current texture data, rendering the target texture data according to the texture drawing order of the target texture data.
7. The method of any of claims 2-6, wherein the rendering the target texture data comprises:
acquiring the target texture data from the first canvas, and rendering the target texture data in a second canvas; wherein the first canvas is different from the second canvas.
8. The method of claim 1, wherein before the rendering the target texture data according to the desired rendering time interval and the current rendering time interval, the method further comprises:
acquiring desired encoding parameters from the second display device, and initializing an original video encoder according to the desired encoding parameters to obtain a standard video encoder;
and the performing video encoding on the target video frame to obtain the target encoded frame comprises:
inputting the target video frame to the standard video encoder;
and performing video encoding on the target video frame through the standard video encoder, and outputting the target encoded frame.
9. The method of claim 8, wherein the desired encoding parameters include at least one of a desired frame rate, a desired bit rate, a desired resolution, and a desired frame size.
10. The method of claim 8, wherein the inputting the target video frame to the standard video encoder comprises:
inputting the target video frame into an input buffer corresponding to the standard video encoder;
and inputting the target video frame to the standard video encoder according to its input order in the input buffer.
11. The method of claim 8, wherein the sending the target encoded frame to the second display device comprises:
inputting the target encoded frame into an output buffer corresponding to the standard video encoder;
and sending the target encoded frame to the second display device according to its input order in the output buffer.
12. A video data processing apparatus, comprising:
a texture data acquisition module configured to acquire target texture data corresponding to display content of a first display device, and determine a current rendering time interval according to a current time and a historical rendering time;
a texture rendering module configured to render the target texture data according to a desired rendering time interval and the current rendering time interval to obtain a target video frame, wherein the desired rendering time interval is determined from a desired frame rate of a second display device;
and a video encoding module configured to perform video encoding on the target video frame to obtain a target encoded frame, and send the target encoded frame to the second display device for the second display device to decode the target encoded frame.
13. The apparatus of claim 12, wherein the texture data acquisition module is specifically configured to:
creating a virtual display device corresponding to the first display device, and acquiring current view data corresponding to current display content through the virtual display device;
if the current view data is different from the historical view data corresponding to the historical display content, carrying out texture drawing on the current view data in a first canvas to generate current texture data, and taking the current texture data as the target texture data;
the current display content is the display content of the first display device at the current moment, and the historical display content is the display content of the first display device at the last historical moment.
14. The apparatus of claim 13, wherein the texture data acquisition module is specifically configured to:
taking the historical texture data generated by the last texture drawing as the target texture data in the case that the desired rendering time interval is the same as the current rendering time interval and no unrendered current texture data exists.
15. The apparatus of claim 14, wherein the texture rendering module is specifically configured to:
comparing the desired rendering time interval with the current rendering time interval, and rendering the target texture data according to a comparison result.
16. The apparatus of claim 15, wherein the texture rendering module is further specifically configured to:
rendering the target texture data in the case that the desired rendering time interval is the same as the current rendering time interval.
17. The apparatus of claim 16, wherein the texture rendering module is further specifically configured to:
in the case that the desired rendering time interval is the same as the current rendering time interval and the target texture data is the current texture data, rendering the target texture data according to the texture drawing order of the target texture data.
18. The apparatus according to any of claims 13-17, wherein the texture rendering module is further specifically configured to:
acquiring the target texture data from the first canvas, and rendering the target texture data in a second canvas; wherein the first canvas is different from the second canvas.
19. The apparatus of claim 12, further comprising a video encoder initialization module, in particular for:
acquiring desired encoding parameters from the second display device, and initializing an original video encoder according to the desired encoding parameters to obtain a standard video encoder;
the video encoding module is specifically configured to:
inputting the target video frame to the standard video encoder;
and performing video encoding on the target video frame through the standard video encoder, and outputting the target encoded frame.
20. The apparatus of claim 19, wherein the desired encoding parameters comprise at least one of a desired frame rate, a desired bit rate, a desired resolution, and a desired frame size.
21. The apparatus of claim 19, wherein the video encoding module is further specifically configured to:
inputting the target video frame into an input buffer corresponding to the standard video encoder;
and inputting the target video frame to the standard video encoder according to its input order in the input buffer.
22. The apparatus of claim 19, wherein the video encoding module is further specifically configured to:
inputting the target encoded frame into an output buffer corresponding to the standard video encoder;
and sending the target encoded frame to the second display device according to its input order in the output buffer.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-11.
25. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-11.
CN202310835048.3A 2023-07-07 2023-07-07 Video data processing method, device, equipment and medium Pending CN116866658A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310835048.3A CN116866658A (en) 2023-07-07 2023-07-07 Video data processing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310835048.3A CN116866658A (en) 2023-07-07 2023-07-07 Video data processing method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN116866658A true CN116866658A (en) 2023-10-10

Family

ID=88231761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310835048.3A Pending CN116866658A (en) 2023-07-07 2023-07-07 Video data processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116866658A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117459190A (en) * 2023-12-20 2024-01-26 中汽研(天津)汽车工程研究院有限公司 OTA communication method of heterogeneous central computing architecture
CN117459190B (en) * 2023-12-20 2024-04-02 中汽研(天津)汽车工程研究院有限公司 OTA communication method of heterogeneous central computing architecture

Similar Documents

Publication Publication Date Title
EP4192015A1 (en) Video encoding method, video decoding method, apparatus, electronic device, storage medium, and computer program product
RU2506715C2 (en) Transmission of variable visual content
CN107529069A (en) A kind of video stream transmission method and device
US11212540B2 (en) Video data processing system
EP3410302B1 (en) Graphic instruction data processing method, apparatus
US20230050250A1 (en) Method and apparatus for encoding video, and storage medium
AU2020456664A1 (en) Reinforcement learning based rate control
CN116866658A (en) Video data processing method, device, equipment and medium
CN111343503B (en) Video transcoding method and device, electronic equipment and storage medium
US9335964B2 (en) Graphics server for remotely rendering a composite image and method of use thereof
CN114051067B (en) Image acquisition method, device, equipment and storage medium
CN113628311B (en) Image rendering method, image rendering device, electronic device, and storage medium
US20140327698A1 (en) System and method for hybrid graphics and text rendering and client computer and graphics processing unit incorporating the same
AU2018254570B2 (en) Systems and methods for deferred post-processes in video encoding
CN114245175A (en) Video transcoding method and device, electronic equipment and storage medium
US12112453B2 (en) Image display control device, transmitting device, image display control method, and program for generating a substitute image
CN116033235B (en) Data transmission method, digital person production equipment and digital person display equipment
CN115767149A (en) Video data transmission method and device
CN115278289A (en) Cloud application rendering video frame processing method and device
CN114422718A (en) Video conversion method and device, electronic equipment and storage medium
US20220076378A1 (en) Image transmission/reception system, image transmission apparatus, image reception apparatus, image transmission/reception method, and program
CN112954438A (en) Image processing method and device
CN114125135B (en) Video content presentation method and device, electronic equipment and storage medium
CN118573870B (en) Video coding method, device, equipment and storage medium
CN117938823A (en) Cloud game picture sharing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination