CN117221511A - Video processing method and device, storage medium and electronic equipment - Google Patents

Video processing method and device, storage medium and electronic equipment

Info

Publication number
CN117221511A
Authority
CN
China
Prior art keywords
frame
augmented reality
state data
driving
data
Prior art date
Legal status
Granted
Application number
CN202311471254.7A
Other languages
Chinese (zh)
Other versions
CN117221511B (en)
Inventor
周志文
李中杰
朱宇翔
纪向晴
Current Assignee
Shenzhen Mapgoo Technology Co ltd
Original Assignee
Shenzhen Mapgoo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Mapgoo Technology Co ltd filed Critical Shenzhen Mapgoo Technology Co ltd
Priority to CN202311471254.7A
Publication of CN117221511A
Application granted
Publication of CN117221511B
Legal status: Active


Landscapes

  • Processing Or Creating Images (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiment of the application discloses a video processing method and apparatus, a storage medium, and an electronic device. The method includes: acquiring driving image frame data of a vehicle and the driving state data corresponding to each image frame in the driving image frame data; selecting a key frame from the driving image frame data and acquiring first target driving state data corresponding to the key frame; constructing an augmented reality key frame according to the key frame and the first target driving state data; selecting a common frame from the driving image frame data according to the key frame and acquiring second target driving state data corresponding to the common frame; constructing an augmented reality common frame according to the common frame and the differential driving state data between the second target driving state data and the first target driving state data; and performing video encoding according to the augmented reality key frame and the augmented reality common frame to generate a target augmented reality driving video stream. The application can realize an assisted driving function.

Description

Video processing method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video processing method and apparatus, a storage medium, and an electronic device.
Background
AR (Augmented Reality) technology can superimpose virtual objects and/or virtual information onto a real scene, giving the user a sensory experience beyond reality: the user perceives a scene in which real objects and virtual objects and/or virtual information are present at the same time.
During vehicle driving, road conditions are complex and drivers have inherent limitations, so it is difficult for a driver to fully grasp the driving information generated while the vehicle is moving, which can lead to accidents. Applying AR technology to vehicle driving can help the driver better grasp this driving information, drive the motor vehicle more safely, and reduce accidents. How to apply AR technology for assisted driving while a driver is operating a vehicle is therefore a problem to be solved.
Disclosure of Invention
The embodiment of the application provides a video processing method and apparatus, a storage medium, and an electronic device, which can superimpose driving state data on the driving video picture captured by a vehicle-mounted camera, thereby realizing an assisted driving function.
In a first aspect, an embodiment of the present application provides a video processing method, including:
acquiring driving image frame data of a vehicle and driving state data corresponding to each image frame in the driving image frame data;
selecting a key frame from the driving image frame data, and acquiring first target driving state data corresponding to the key frame;
constructing an augmented reality key frame according to the key frame and the first target driving state data;
selecting a common frame from the driving image frame data according to the key frame, and acquiring second target driving state data corresponding to the common frame;
comparing the second target driving state data with the first target driving state data to obtain differential driving state data;
constructing an augmented reality common frame according to the common frame and the differential driving state data;
and carrying out video coding according to the augmented reality key frame and the augmented reality common frame to generate a target augmented reality driving video stream.
In some embodiments, the constructing an augmented reality key frame from the key frame and the first target driving state data includes:
encoding the first target driving state data to generate first key-value structured data;
converting the first key-value structured data into a JSON format to obtain JSON-format first target driving state data;
compressing the JSON-format first target driving state data to obtain temporary binary stream data;
encrypting the temporary binary stream data to generate a first binary stream to be loaded;
and adding the first binary stream to be loaded to the SEI-type NAL unit corresponding to the key frame to obtain the augmented reality key frame.
In some embodiments, the comparing the second target driving state data with the first target driving state data to obtain the differential driving state data includes:
encoding the second target driving state data to generate second key-value structured data;
converting the second key-value structured data into a JSON format to obtain JSON-format second target driving state data;
and comparing the JSON-format second target driving state data with the JSON-format first target driving state data to obtain the differential driving state data.
In some embodiments, the constructing an augmented reality common frame from the common frame and the differential driving state data includes:
compressing the differential driving state data to generate a second binary stream to be loaded;
and adding the second binary stream to be loaded to the SEI-type NAL unit corresponding to the common frame to obtain the augmented reality common frame.
In some embodiments, after the video encoding is performed according to the augmented reality key frame and the augmented reality common frame to generate the target augmented reality driving video stream, the method further includes:
receiving a play instruction for the target augmented reality driving video stream;
when the play instruction indicates that the driving state data is to be displayed, parsing the data in the SEI-type NAL units in the target augmented reality driving video stream;
and when the play instruction indicates that the driving state data is not to be displayed, skipping the data in the SEI-type NAL units in the target augmented reality driving video stream.
In some embodiments, the performing video encoding according to the augmented reality key frame and the augmented reality common frame to generate the target augmented reality driving video stream includes:
video encoding the augmented reality key frame and the augmented reality common frame based on an AVC video encoding standard to generate the target augmented reality driving video stream;
or,
and carrying out video coding on the augmented reality key frame and the augmented reality common frame based on an HEVC video coding standard to generate the target augmented reality driving video stream.
In some embodiments, the selecting a key frame from the driving image frame data includes:
selecting an image frame from the driving image frame data at fixed time intervals as the key frame;
or,
judging whether difference information exists between two consecutive image frames in the driving image frame data;
and when the difference information exists, taking the latter of the two consecutive image frames as the key frame.
In a second aspect, an embodiment of the present application further provides a video processing apparatus, including:
an acquiring unit, configured to acquire driving image frame data of the vehicle and driving state data corresponding to each image frame in the driving image frame data;
the first selecting unit is used for selecting a key frame from the driving image frame data and acquiring first target driving state data corresponding to the key frame;
the first construction unit is used for constructing an augmented reality key frame according to the key frame and the first target driving state data;
the second selecting unit is used for selecting a common frame from the driving image frame data according to the key frame and acquiring second target driving state data corresponding to the common frame;
a comparison unit, configured to compare the second target driving state data with the first target driving state data to obtain differential driving state data;
the second construction unit is used for constructing an augmented reality common frame according to the common frame and the differential driving state data;
and the video coding unit is used for carrying out video coding according to the augmented reality key frame and the augmented reality common frame to generate a target augmented reality driving video stream.
In a third aspect, embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when run on a computer, causes the computer to perform a video processing method as provided in any of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory stores a computer program, and the processor is configured to execute the video processing method provided in any embodiment of the present application by calling the computer program.
According to the technical solution provided by the embodiment of the application, driving image frame data of a vehicle and the driving state data corresponding to each image frame in the driving image frame data are obtained; a key frame is selected from the driving image frame data, and first target driving state data corresponding to the key frame is obtained; an augmented reality key frame is constructed according to the key frame and the first target driving state data; a common frame is selected from the driving image frame data according to the key frame, and second target driving state data corresponding to the common frame is obtained; the second target driving state data is compared with the first target driving state data to obtain differential driving state data; an augmented reality common frame is constructed according to the common frame and the differential driving state data; and video encoding is performed according to the augmented reality key frame and the augmented reality common frame to generate a target augmented reality driving video stream. According to the application, the driving state data can be displayed in a superimposed manner in the driving video picture captured by the vehicle-mounted camera, and the driving state data prompts the user during driving, thereby realizing the assisted driving function.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of an SEI NAL structure of an AVC including an augmented reality frame in a video processing method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an SEI NAL structure of an HEVC including an augmented reality frame in a video processing method according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by a person skilled in the art without any inventive effort, are intended to be within the scope of the present application based on the embodiments of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The embodiment of the application provides a video processing method. The execution subject of the video processing method may be the video processing apparatus provided by the embodiment of the application, or an electronic device integrated with the video processing apparatus, where the video processing apparatus may be implemented in hardware or software.
Referring to fig. 1, fig. 1 is a flowchart of a video processing method according to an embodiment of the application. The specific flow of the video processing method provided by the embodiment of the application can be as follows:
s110, acquiring driving image frame data of the vehicle, and driving state data corresponding to each image frame in the driving image frame data.
In the embodiment of the application, picture data and sound data during driving can be recorded through a vehicle-mounted camera arranged on the vehicle. The picture data constitutes the driving image frame data in the application.
The driving state data refers to the various data information generated by the vehicle during driving, including but not limited to vehicle speed, acceleration, steering angle, braking state, accelerator pedal position, engine speed, GPS position, identified vehicles and pedestrian areas, safety distance, and the like. The driving state data may be collected by related sensors provided in the vehicle.
In this embodiment, the vehicle-mounted camera acquires the driving image frame data of the vehicle while the vehicle-mounted sensors acquire the driving state data of the vehicle, so that each image frame has corresponding, synchronously acquired driving state data.
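For illustration, the pairing of an image frame with its synchronously acquired driving state data can be modeled as a simple record. This is a minimal sketch; the field names here are assumptions for illustration, not structures defined by the application.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class DrivingFrame:
    # One captured image frame plus the driving state sampled at the same instant.
    # All field names are illustrative assumptions, not defined by the application.
    timestamp_ms: int                                    # capture time of the frame
    pixels: bytes                                        # raw frame data from the vehicle-mounted camera
    state: dict[str, Any] = field(default_factory=dict)  # e.g. {"speed": 62.5, "rpm": 2100, "gps": [114.06, 22.54]}
```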
S120, selecting a key frame from the driving image frame data, and acquiring first target driving state data corresponding to the key frame.
In this embodiment, a key frame is selected from the driving image frame data, and the first target driving state data corresponding to the key frame is obtained. Key frame selection may follow related-art techniques.
In some embodiments, selecting a key frame from the driving image frame data includes:
and selecting an image frame from the driving image frame data at fixed time intervals as the key frame.
The specific value of the fixed time interval can be set according to actual requirements; for example, a key frame is selected at every fixed interval (e.g., every 1 second).
In some embodiments, it is judged whether difference information exists between two consecutive image frames in the driving image frame data;
and when the difference information exists, the latter of the two consecutive image frames is taken as the key frame.
In this embodiment, the key frame is selected according to whether difference information exists between two consecutive image frames, that is, according to the scene-change condition in the image frames. For example, by analyzing information such as pixel differences and motion vectors between two consecutive image frames, the locations of scene changes can be determined, and key frames can be selected at these locations.
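Both selection strategies can be sketched as follows. This is a minimal illustration assuming grayscale frames held as numpy arrays; the mean-absolute-difference measure and the threshold value are assumptions standing in for the "difference information" analysis described above.

```python
import numpy as np

def is_key_frame(prev: np.ndarray, curr: np.ndarray, threshold: float = 12.0) -> bool:
    # Difference-based selection: treat the mean absolute pixel difference
    # between two consecutive grayscale frames as the "difference information".
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).mean()
    return diff > threshold

def fixed_interval_key_frames(frame_count: int, interval: int = 30) -> list[int]:
    # Fixed-interval selection: every `interval`-th frame, e.g. one per second at 30 fps.
    return list(range(0, frame_count, interval))
```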
S130, constructing an augmented reality key frame according to the key frame and the first target driving state data.
In some embodiments, the constructing an augmented reality key frame from the key frame and the first target driving state data includes:
encoding the first target driving state data to generate first key-value structured data;
converting the first key-value structured data into a JSON format to obtain JSON-format first target driving state data;
compressing the JSON-format first target driving state data to obtain temporary binary stream data;
encrypting the temporary binary stream data to generate a first binary stream to be loaded;
and adding the first binary stream to be loaded to the SEI-type NAL unit corresponding to the key frame to obtain the augmented reality key frame.
NAL (Network Abstraction Layer) is the format used to encapsulate video coded data in the AVC/HEVC coding specifications, and NAL units are the basic building blocks of an AVC/HEVC video stream. SEI-type NAL units can be used to store enhancement information. For example, the libx264 encoder by default generates an H.264 code stream that contains an SEI-type NAL unit whose payload content is the string "x264-core 138-h.264.", and players automatically skip SEI-type NAL units they do not recognize when decoding, so compatibility is fully preserved.
In this embodiment, the first target driving state data may be encoded into nested key-value structured data, that is, the first key-value structured data. The first key-value structured data is then converted into JSON format to obtain the JSON-format first target driving state data, which is stored in a preset buffer space for subsequent use. The JSON-format first target driving state data is then compressed by a compression algorithm to obtain temporary binary stream data, the temporary binary stream data is encrypted to generate the first binary stream to be loaded, and the first binary stream to be loaded is added to the SEI-type NAL unit corresponding to the key frame, thereby constructing the augmented reality key frame.
The compression algorithm may be gzip, 7z, or the like. The encryption may employ an RSA private-key signature process.
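A minimal sketch of this pipeline is given below, using Python's standard json and gzip modules and the third-party cryptography package for the RSA signature. The exact byte layout of the resulting stream (compressed data followed by the signature) is an assumption, since the embodiment specifies only the processing steps.

```python
import gzip
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def build_key_frame_payload(state: dict, private_key) -> bytes:
    raw_json = json.dumps(state, separators=(",", ":")).encode("utf-8")
    compressed = gzip.compress(raw_json)       # temporary binary stream data
    signature = private_key.sign(              # RSA private-key signature
        compressed, padding.PKCS1v15(), hashes.SHA256()
    )
    return compressed + signature              # first binary stream to be loaded

# Usage with a throwaway key:
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
payload = build_key_frame_payload({"speed": 62.5, "rpm": 2100}, key)
```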
S140, selecting a common frame from the driving image frame data according to the key frame, and acquiring second target driving state data corresponding to the common frame.
Here, common frames refer to the non-key frames in a video sequence.
In this embodiment, after the key frame is selected, a common frame may be selected from the driving image frame data according to the key frame, and the second target driving state data corresponding to the common frame may be obtained.
S150, comparing the second target driving state data with the first target driving state data to obtain differential driving state data.
In some embodiments, the comparing the second target driving state data with the first target driving state data to obtain the differential driving state data includes:
encoding the second target driving state data to generate second key-value structured data;
converting the second key-value structured data into a JSON format to obtain JSON-format second target driving state data;
and comparing the JSON-format second target driving state data with the JSON-format first target driving state data to obtain the differential driving state data.
In this embodiment, the second target driving state data is likewise encoded into nested key-value structured data, that is, the second key-value structured data, which is then converted into JSON format to obtain the JSON-format second target driving state data. The JSON-format first target driving state data is retrieved from the preset buffer space and compared with it to obtain the differential driving state data.
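The comparison can be sketched as a field-level diff. A flat, one-level diff is assumed here for brevity, and marking a removed field with None is an illustrative convention; the nested key-value structure described above would need a recursive variant.

```python
def diff_state(key_state: dict, curr_state: dict) -> dict:
    # Keep only the fields of the common frame's state that differ from the
    # cached key-frame state; a None value marks a field that disappeared.
    changed = {k: v for k, v in curr_state.items() if key_state.get(k) != v}
    removed = {k: None for k in key_state.keys() - curr_state.keys()}
    return {**changed, **removed}

# Example: only the speed changed since the key frame.
# diff_state({"speed": 60, "rpm": 2100}, {"speed": 62, "rpm": 2100}) -> {"speed": 62}
```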
S160, constructing an augmented reality common frame according to the common frame and the differential driving state data.
In some embodiments, the constructing an augmented reality common frame from the common frame and the differential driving state data includes:
compressing the differential driving state data to generate a second binary stream to be loaded;
and adding the second binary stream to be loaded to the SEI-type NAL unit corresponding to the common frame to obtain the augmented reality common frame.
In this embodiment, the differential driving state data is compressed by a compression algorithm to obtain the second binary stream to be loaded, and the second binary stream to be loaded is added to the SEI-type NAL unit corresponding to the common frame, thereby constructing the augmented reality common frame.
The compression algorithm may be gzip, 7z, or the like.
S170, video encoding is carried out according to the augmented reality key frame and the augmented reality common frame, and a target augmented reality driving video stream is generated.
In this embodiment, after the augmented reality key frame and the augmented reality common frame are constructed, video encoding is performed according to the augmented reality key frame and the augmented reality common frame to generate the target augmented reality driving video stream.
In some embodiments, the performing video encoding according to the augmented reality key frame and the augmented reality common frame to generate the target augmented reality driving video stream includes: video encoding the augmented reality key frame and the augmented reality common frame based on the AVC (Advanced Video Coding) video coding standard to generate the target augmented reality driving video stream.
in some embodiments, the generating the target augmented reality driving video stream according to the video encoding of the augmented reality key frame and the augmented reality normal frame includes: and carrying out video coding on the augmented reality key frame and the augmented reality common frame based on an HEVC (High Efficiency Video Coding ) video coding standard to generate the target augmented reality driving video stream.
It can be appreciated that the target augmented reality driving video stream includes driving state data, and when the target augmented reality driving video stream is played, the driving state data can be displayed in a driving picture.
In particular, the application is not limited by the order of execution of the steps described, as some of the steps may be performed in other orders or concurrently without conflict.
As can be seen from the above, the video processing method provided by the embodiment of the application can superimpose the driving state data on the driving video picture captured by the vehicle-mounted camera and prompt the user during driving through the driving state data, thereby realizing the assisted driving function.
In some embodiments, after the video encoding is performed according to the augmented reality key frame and the augmented reality common frame to generate the target augmented reality driving video stream, the method further includes:
receiving a play instruction for the target augmented reality driving video stream;
when the play instruction indicates that the driving state data is to be displayed, parsing the data in the SEI-type NAL units in the target augmented reality driving video stream;
and when the play instruction indicates that the driving state data is not to be displayed, skipping the data in the SEI-type NAL units in the target augmented reality driving video stream.
It can be understood that playing the target augmented reality driving video stream supports two playing effects: one in which the driving state data is displayed in the driving picture and one in which it is not. When the play instruction indicates that the driving state data is to be displayed, the data in the SEI-type NAL units in the target augmented reality driving video stream is parsed; when the play instruction indicates that the driving state data is not to be displayed, that data is skipped. The application thus provides two different play instructions to meet the needs of different users.
In the application, the driving state data acquired by the vehicle-mounted sensors is taken as augmented reality metadata, and the augmented reality metadata is encoded/compressed/encrypted to construct the augmented reality key frames and augmented reality common frames. The construction process can generally comprise the following steps:
(1) The collected augmented reality metadata is encoded into nested key-value structured data, and the original JSON format is preserved.
(2) An augmented reality key frame is constructed periodically and can be parsed independently.
(1) Using a compression algorithm (gzip/7z, etc.), the original JSON is compressed into a temporary binary stream and buffered for later use (for reference by augmented reality common frames).
(2) The temporary binary stream is encrypted and signed using an RSA private key to generate the binary stream to be loaded.
(3) Augmented reality common frames are constructed continuously, with reference to the preceding augmented reality key frame.
(1) The current original JSON string is compared against the cached JSON string, and the differential JSON string is computed from their differences.
(2) Using a compression algorithm (gzip/7z, etc.), the differential JSON is compressed into the binary stream to be loaded.
It should be noted that, in the present application, AVC/HEVC carries the augmented reality metadata in an SEI of the user_data_unregistered() type, whose SEI payload type identifier is 0x05; a UUID is used to identify the specific custom service; the data (head + body) requires anti-contention (emulation prevention) processing; the head consists of a fixed two-byte header plus an extension header of at most 4 additional bytes and identifies the content details, while the body stores the augmented reality frame.
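The anti-contention step is the standard AVC/HEVC emulation-prevention mechanism: a 0x03 byte is inserted after any two consecutive zero bytes whenever the next byte is in the range 0x00-0x03, so the SEI payload can never imitate a NAL start code. A minimal sketch:

```python
def anti_contention(rbsp: bytes) -> bytes:
    # Insert an emulation-prevention byte (0x03) after every run of two zero
    # bytes that would otherwise be followed by 0x00, 0x01, 0x02 or 0x03.
    out = bytearray()
    zeros = 0
    for b in rbsp:
        if zeros >= 2 and b <= 0x03:
            out.append(0x03)
            zeros = 0
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)
```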
The head composition is as follows:
// version (2 bits) + frame type (2 bits) + compression type (4 bits) + encryption type (4 bits) + header extension (4 bits)
Version: fixed - 01
Frame type: augmented reality common frame - 00, augmented reality key frame - 01
Compression type: uncompressed - 0000, gzip compression - 0001
Encryption type: unencrypted - 0000, RSA encrypted - 0001
Header extension: not extended - 0000, extended 1 byte - 0001 through extended 4 bytes - 1111
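Packing these fields into the two-byte fixed header can be sketched as follows. The bit ordering within the two bytes (version in the top bits of the first byte, the extension nibble at the bottom of the second) is an assumption, since the layout above gives only the field widths and values.

```python
def pack_head(version: int, frame_type: int, compression: int,
              encryption: int, head_ext: int) -> bytes:
    # version (2 bits) | frame type (2 bits) | compression (4 bits)  -> byte 0
    # encryption (4 bits) | header extension (4 bits)                -> byte 1
    byte0 = ((version & 0b11) << 6) | ((frame_type & 0b11) << 4) | (compression & 0b1111)
    byte1 = ((encryption & 0b1111) << 4) | (head_ext & 0b1111)
    return bytes([byte0, byte1])

# Augmented reality key frame, gzip-compressed, RSA-encrypted, no extension:
head = pack_head(version=0b01, frame_type=0b01, compression=0b0001,
                 encryption=0b0001, head_ext=0b0000)
```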
In the embodiment of the application, the SEI NAL structure of AVC containing an augmented reality frame is shown in Fig. 2, and the SEI NAL structure of HEVC containing an augmented reality frame is shown in Fig. 3.
In the embodiment of the application, when the target augmented reality driving video stream is played:
(1) During live broadcast or playback, the user enables the player's augmented reality layer setting, and the player identifies the SEI NAL units in the code stream according to the service UUID.
(2) An augmented reality key frame is recognized according to the head of the payload, decrypted/decompressed, and the key frame's original JSON is output and cached.
(3) An augmented reality common frame is recognized according to the head of the payload and decompressed to output the differential JSON; the original JSON of the current common frame is then computed and restored by combining it with the cached key-frame original JSON (a merge sketch follows this list).
(4) A rendering layer is created, and the augmented reality data is fused:
(1) The regions of interest (vehicles or pedestrians) in the original JSON are read, rectangular boxes are drawn on a transparent layer around them, the distance safety level is marked with colors, and the boxes are annotated with data such as object type/distance/speed/direction.
(2) The vehicle sensor data in the original JSON is read, and a virtual minimap, a virtual dashboard, road prediction, driving trend, and the like are generated in the transparent layer.
1) Virtual minimap: based on the GPS data combined with the map system, the vehicle's position and track can be displayed intuitively.
2) Virtual dashboard: the current vehicle speed/acceleration/braking/fuel level/temperature, etc., can be seen, restoring the true vehicle state.
3) Advance road prediction: combined with the vehicle's navigation and map data, prompts for turns/uphill/downhill/lane changes, etc., appear on the screen in advance.
4) Motion trend display: prompts showing the driver's current operation and the direction in which the vehicle is actually moving.
(5) The rendering layer is fused and rendered into the original video picture, so that the driving environment at that moment can be fully restored.
(6) The augmented reality layer is rich in information, and the user can customize which information is displayed and adjust the layout positions of individual components at any time.
(7) Multiple skin themes are provided, and the user can switch between them at any time according to preference.
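The restoration in step (3) above can be sketched as the inverse of the field-level diff given earlier; this is a minimal illustration that assumes the same flat diff convention, with None marking a removed field.

```python
def restore_common_frame_state(cached_key_json: dict, diff_json: dict) -> dict:
    # Merge the differential JSON of a common frame into the cached key-frame
    # JSON to recover the frame's full driving state.
    restored = dict(cached_key_json)
    for k, v in diff_json.items():
        if v is None:
            restored.pop(k, None)   # field absent in the common frame
        else:
            restored[k] = v
    return restored
```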
In summary, the scheme provided by the application has the following advantages:
1. It is suitable for multiple scenarios at once (live broadcast/playback/real-time preview) and can seamlessly interface with third-party live-broadcast/recorded-broadcast platforms, offering strong compatibility.
2. The augmented reality metadata is highly integrated: the aggregation of various data can faithfully restore the scene, and the augmented reality metadata stays naturally synchronized with the original picture.
3. The real-time overlay rendering of the augmented reality data is highly flexible and effective, and the user can select a focus of attention to avoid irrelevant interference.
4. The augmented reality data achieves a high compression rate, which facilitates transmission and storage, and the data is securely protected, effectively preventing information leakage and tampering.
A video processing apparatus is also provided in an embodiment. Referring to fig. 4, fig. 4 is a schematic structural diagram of a video processing apparatus 200 according to an embodiment of the application. The video processing apparatus 200 is applied to an electronic device, where the video processing apparatus 200 includes an acquisition unit 201, a first selection unit 202, a first construction unit 203, a second selection unit 204, a comparison unit 205, a second construction unit 206, and a video encoding unit 207, as follows:
an acquiring unit 201, configured to acquire driving image frame data of a vehicle, and driving state data corresponding to each image frame in the driving image frame data;
a first selecting unit 202, configured to select a key frame from the driving image frame data, and obtain first target driving state data corresponding to the key frame;
a first construction unit 203, configured to construct an augmented reality key frame according to the key frame and the first target driving state data;
a second selecting unit 204, configured to select a common frame from the driving image frame data according to the key frame, and obtain second target driving state data corresponding to the common frame;
a comparing unit 205, configured to compare the second target driving state data with the first target driving state data to obtain differential driving state data;
a second construction unit 206, configured to construct an augmented reality common frame according to the common frame and the differential driving state data;
the video encoding unit 207 is configured to perform video encoding according to the augmented reality key frame and the augmented reality common frame to generate a target augmented reality driving video stream.
In some embodiments, the first construction unit 203 may be configured to:
encoding the first target driving state data to generate first key-value structured data;
converting the first key-value structured data into a JSON format to obtain JSON-format first target driving state data;
compressing the JSON-format first target driving state data to obtain temporary binary stream data;
encrypting the temporary binary stream data to generate a first binary stream to be loaded;
and adding the first binary stream to be loaded to the SEI-type NAL unit corresponding to the key frame to obtain the augmented reality key frame.
In some embodiments, the comparison unit 205 may be configured to:
encoding the second target driving state data to generate second key-value structured data;
converting the second key-value structured data into a JSON format to obtain JSON-format second target driving state data;
and comparing the JSON-format second target driving state data with the JSON-format first target driving state data to obtain the differential driving state data.
In some embodiments, the second construction unit 206 may be configured to:
compressing the differential driving state data to generate a second binary stream to be loaded;
and adding the second binary stream to be loaded to the SEI-type NAL unit corresponding to the common frame to obtain the augmented reality common frame.
In some embodiments, the video processing apparatus 200 may further include a video playing unit, which may be configured to:
receiving a play instruction for the target augmented reality driving video stream;
when the play instruction indicates that the driving state data is to be displayed, parsing the data in the SEI-type NAL units in the target augmented reality driving video stream;
and when the play instruction indicates that the driving state data is not to be displayed, skipping the data in the SEI-type NAL units in the target augmented reality driving video stream.
In some embodiments, the video encoding unit 207 may be configured to:
video encoding the augmented reality key frame and the augmented reality common frame based on an AVC video encoding standard to generate the target augmented reality driving video stream;
or,
and carrying out video coding on the augmented reality key frame and the augmented reality common frame based on an HEVC video coding standard to generate the target augmented reality driving video stream.
In some embodiments, the first selection unit 202 may be configured to:
selecting an image frame from the driving image frame data at fixed time intervals as the key frame;
or,
judging whether difference information exists between two consecutive image frames in the driving image frame data;
and when the difference information exists, taking the latter of the two consecutive image frames as the key frame.
It should be noted that the video processing apparatus provided in the embodiment of the present application belongs to the same concept as the video processing method in the above embodiment, and any method provided in the video processing method embodiment may be run on the video processing apparatus; for the detailed implementation process, refer to the video processing method embodiment, which is not repeated here.
In addition, in order to better implement the video processing method of the embodiment of the present application, the present application further provides an electronic device based on the video processing method. Referring to fig. 5, fig. 5 shows a schematic structural diagram of an electronic device 300 provided by the present application. As shown in fig. 5, the electronic device 300 includes a processor 301 and a memory 302, where the processor 301 is configured to implement the steps of the video processing method of the above embodiment when executing a computer program stored in the memory 302, for example:
acquiring driving image frame data of a vehicle and driving state data corresponding to each image frame in the driving image frame data;
selecting a key frame from the driving image frame data, and acquiring first target driving state data corresponding to the key frame;
constructing an augmented reality key frame according to the key frame and the first target driving state data;
selecting a common frame from the driving image frame data according to the key frame, and acquiring second target driving state data corresponding to the common frame;
comparing the difference between the second target running state data and the first target running state data to obtain difference running state data;
constructing an augmented reality ordinary frame according to the ordinary frame and the differential driving state data;
and carrying out video coding according to the augmented reality key frame and the augmented reality common frame to generate a target augmented reality driving video stream.
By way of example, the computer program may be partitioned into one or more modules/units that are stored in the memory 302 and executed by the processor 301 to implement the embodiment of the application. The one or more modules/units may be a series of computer program instruction segments capable of performing particular functions, the instruction segments being used to describe the execution process of the computer program in the computer device.
The electronic device 300 may include, but is not limited to, the processor 301 and the memory 302. It will be appreciated by those skilled in the art that the illustration is merely an example of the electronic device 300 and does not limit it; the electronic device 300 may include more or fewer components than shown, combine certain components, or have different components. For example, the electronic device 300 may further include input-output devices, network access devices, buses, and the like, through which the processor 301, the memory 302, the input-output devices, the network access devices, etc., are connected.
The processor 301 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the electronic device 300 and connects the various parts of the entire electronic device 300 using various interfaces and lines.
The memory 302 may be used to store the computer program and/or modules, and the processor 301 implements the various functions of the computer device by running or executing the computer program and/or modules stored in the memory 302 and invoking data stored in the memory 302. The memory 302 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like, and the data storage area may store data created according to the use of the electronic device 300 (such as audio data, video data, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the video processing apparatus, the electronic device 300 and the corresponding units thereof described above may refer to the description of the video processing method in the above embodiment of the present application, and the detailed description thereof will not be repeated here.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer readable storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform the steps of the video processing method in the above embodiment of the present application, for example:
acquiring driving image frame data of a vehicle and driving state data corresponding to each image frame in the driving image frame data;
selecting a key frame from the driving image frame data, and acquiring first target driving state data corresponding to the key frame;
constructing an augmented reality key frame according to the key frame and the first target driving state data;
selecting a common frame from the driving image frame data according to the key frame, and acquiring second target driving state data corresponding to the common frame;
comparing the difference between the second target running state data and the first target running state data to obtain difference running state data;
constructing an augmented reality ordinary frame according to the ordinary frame and the differential driving state data;
and carrying out video coding according to the augmented reality key frame and the augmented reality common frame to generate a target augmented reality driving video stream.
The specific operation may refer to the description of the video processing method in the above embodiments of the present application, and will not be repeated here.
Wherein the computer-readable storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disk or optical disk, and the like.
Since the instructions stored in the computer readable storage medium can execute the steps in the video processing method in the above embodiment of the present application, the beneficial effects that can be achieved by the video processing method in the above embodiment of the present application can be achieved, and detailed descriptions are omitted herein.
Furthermore, the terms "first," "second," and "third," and the like, herein, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the particular steps or modules listed and certain embodiments may include additional steps or modules not listed or inherent to such process, method, article, or apparatus.
The video processing method, apparatus, electronic device and storage medium provided by the present application have been described in detail, and specific examples are applied to illustrate the principles and embodiments of the present application, and the description of the above examples is only used to help understand the method and core idea of the present application; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in light of the ideas of the present application, the present description should not be construed as limiting the present application.

Claims (10)

1. A video processing method, comprising:
acquiring driving image frame data of a vehicle and driving state data corresponding to each image frame in the driving image frame data;
selecting a key frame from the driving image frame data, and acquiring first target driving state data corresponding to the key frame;
constructing an augmented reality key frame according to the key frame and the first target driving state data;
selecting a common frame from the driving image frame data according to the key frame, and acquiring second target driving state data corresponding to the common frame;
comparing the second target driving state data with the first target driving state data to obtain differential driving state data;
constructing an augmented reality common frame according to the common frame and the differential driving state data;
and carrying out video coding according to the augmented reality key frame and the augmented reality common frame to generate a target augmented reality driving video stream.
2. The video processing method of claim 1, wherein the constructing an augmented reality key frame from the key frame and the first target driving state data comprises:
encoding the first target driving state data to generate first key-value structured data;
converting the first key-value structured data into a JSON format to obtain JSON-format first target driving state data;
compressing the JSON-format first target driving state data to obtain temporary binary stream data;
encrypting the temporary binary stream data to generate a first binary stream to be loaded;
and adding the first binary stream to be loaded to the SEI-type NAL unit corresponding to the key frame to obtain the augmented reality key frame.
3. The video processing method according to claim 2, wherein the comparing the second target driving state data with the first target driving state data to obtain the differential driving state data comprises:
encoding the second target driving state data to generate second key-value structured data;
converting the second key-value structured data into a JSON format to obtain JSON-format second target driving state data;
and comparing the JSON-format second target driving state data with the JSON-format first target driving state data to obtain the differential driving state data.
4. The video processing method according to claim 3, wherein the constructing an augmented reality common frame from the common frame and the differential driving state data comprises:
compressing the differential driving state data to generate a second binary stream to be loaded;
and adding the second binary stream to be loaded to the SEI-type NAL unit corresponding to the common frame to obtain the augmented reality common frame.
5. The video processing method of claim 4, wherein after performing the video encoding according to the augmented reality key frame and the augmented reality common frame to generate the target augmented reality driving video stream, the method further comprises:
receiving a play instruction for the target augmented reality driving video stream;
when the play instruction indicates that the driving state data is to be displayed, parsing the data in the SEI-type NAL units in the target augmented reality driving video stream;
and when the play instruction indicates that the driving state data is not to be displayed, skipping the data in the SEI-type NAL units in the target augmented reality driving video stream.
6. The video processing method of claim 1, wherein the performing video encoding according to the augmented reality key frame and the augmented reality common frame to generate the target augmented reality driving video stream comprises:
video encoding the augmented reality key frame and the augmented reality common frame based on an AVC video encoding standard to generate the target augmented reality driving video stream;
or,
and carrying out video coding on the augmented reality key frame and the augmented reality common frame based on an HEVC video coding standard to generate the target augmented reality driving video stream.
7. The video processing method of claim 1, wherein the selecting a key frame from the driving image frame data comprises:
selecting an image frame from the driving image frame data at fixed time intervals as the key frame;
or,
judging whether difference information exists between two consecutive image frames in the driving image frame data;
and when the difference information exists, taking the latter of the two consecutive image frames as the key frame.
8. A video processing apparatus, comprising:
an acquiring unit, configured to acquire driving image frame data of the vehicle and driving state data corresponding to each image frame in the driving image frame data;
the first selecting unit is used for selecting a key frame from the driving image frame data and acquiring first target driving state data corresponding to the key frame;
the first construction unit is used for constructing an augmented reality key frame according to the key frame and the first target driving state data;
the second selecting unit is used for selecting a common frame from the driving image frame data according to the key frame and acquiring second target driving state data corresponding to the common frame;
a comparison unit, configured to compare the second target driving state data with the first target driving state data to obtain differential driving state data;
the second construction unit is used for constructing an augmented reality common frame according to the common frame and the differential driving state data;
and the video coding unit is used for carrying out video coding according to the augmented reality key frame and the augmented reality common frame to generate a target augmented reality driving video stream.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when run on a computer, causes the computer to perform the video processing method according to any one of claims 1 to 7.
10. An electronic device comprising a processor and a memory, the memory storing a computer program, characterized in that the processor is adapted to perform the video processing method according to any of claims 1 to 7 by invoking the computer program.
CN202311471254.7A 2023-11-07 2023-11-07 Video processing method and device, storage medium and electronic equipment Active CN117221511B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311471254.7A CN117221511B (en) 2023-11-07 2023-11-07 Video processing method and device, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN117221511A true
CN117221511B (en) 2024-03-12

Family

ID=89042939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311471254.7A Active CN117221511B (en) 2023-11-07 2023-11-07 Video processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117221511B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180376180A1 (en) * 2015-12-29 2018-12-27 Thomson Licensing Method and apparatus for metadata insertion pipeline for streaming media
CN111429517A (en) * 2020-03-23 2020-07-17 Oppo广东移动通信有限公司 Relocation method, relocation device, storage medium and electronic device
CN111986261A (en) * 2020-08-13 2020-11-24 清华大学苏州汽车研究院(吴江) Vehicle positioning method and device, electronic equipment and storage medium
US20210374972A1 (en) * 2019-02-20 2021-12-02 Huawei Technologies Co., Ltd. Panoramic video data processing method, terminal, and storage medium
CN114742977A (en) * 2022-03-30 2022-07-12 青岛虚拟现实研究院有限公司 Video perspective method based on AR technology
US11529968B1 (en) * 2022-06-07 2022-12-20 Robert A. Schaeffer Method and system for assisting drivers in locating objects that may move into their vehicle path
CN116071949A (en) * 2023-04-03 2023-05-05 北京永泰万德信息工程技术有限公司 Augmented reality method and device for driving assistance


Also Published As

Publication number Publication date
CN117221511B (en) 2024-03-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant