CN115733940B - Multi-source heterogeneous video processing display device and method for ship system - Google Patents

Multi-source heterogeneous video processing display device and method for ship system

Info

Publication number
CN115733940B
CN115733940B (application CN202211377465.XA)
Authority
CN
China
Prior art keywords
layer
video
unit
superposition
radar
Prior art date
Legal status
Active
Application number
CN202211377465.XA
Other languages
Chinese (zh)
Other versions
CN115733940A (en)
Inventor
龙小军
张正华
万凯
胡硕
郭浩
Current Assignee
709th Research Institute of CSSC
Original Assignee
709th Research Institute of CSSC
Priority date
Filing date
Publication date
Application filed by 709th Research Institute of CSSC filed Critical 709th Research Institute of CSSC
Priority to CN202211377465.XA priority Critical patent/CN115733940B/en
Publication of CN115733940A publication Critical patent/CN115733940A/en
Application granted granted Critical
Publication of CN115733940B publication Critical patent/CN115733940B/en

Landscapes

  • Controls And Circuits For Display Device (AREA)

Abstract

The invention provides a multi-source heterogeneous video processing display device and method for a ship system, belonging to the field of video data processing. A protocol offloading unit acquires uncompressed original radar video data and photoelectric video data; a video frame format encapsulation unit forms an original video layer; a radar scan conversion unit forms a radar video layer; a graphics distribution unit forms a graphics bottom layer; a layer separation unit forms a mouse plotting layer; a primary fusion superposition unit processes the radar video layer and the mouse plotting layer to form a primary fusion superposition layer; a secondary fusion superposition unit processes the graphics bottom layer and the primary fusion superposition layer to form a secondary fusion superposition layer; an encoding and decoding unit forms a compressed video layer; and a video comprehensive processing unit superimposes the secondary fusion superposition layer, the original video layer and the compressed video layer and outputs them for display. The invention not only improves display quality and stability, but also realizes information sharing among the display consoles while requiring one less encoding device.

Description

Multi-source heterogeneous video processing display device and method for ship system
Technical Field
The invention belongs to the field of video data processing, and particularly relates to a multi-source heterogeneous video processing display device and method for a ship system.
Background
A shipborne display console is a multi-source access device that accepts optical-frequency/radio-frequency, satellite communication, and analog/digital signal sources, covering video and media resources such as radar, sonar, infrared, and photoelectric video. According to the requirements of the combat system, the shipborne display and control terminal must superimpose graphics onto image video, perform fusion transparency processing of graphics and radar video, and display multiple layers and windows such as the plotting layer and the mouse layer.
Same-window display of multi-source heterogeneous video is generally realized either on an FPGA embedded platform or in software using CPU computing power. The former can complete radar scan conversion, separation of the plotting and mouse layers, fusion transparency processing of graphics and radar video, and display of multi-size asynchronous windows. However, this approach is limited by FPGA logic resources and timing-closure constraints, so it is mostly used for lower resolutions and fewer channels of video overlay and windowed display; for fusion superposition display of more than two channels of ultra-high-resolution multi-source heterogeneous video it is inadequate, and display quality and stability suffer severely. The latter relies on the CPU, so the computing-power requirement is high; it occupies substantial CPU resources, weakens the CPU's capacity for other functions, and reduces the timeliness of multi-window display.
Disclosure of Invention
Compared with the traditional scheme, the invention not only increases the number of video channels displayed on the same screen of the display console and improves display stability, but also achieves network sharing of each display console's graphical interface while eliminating one encoding device.
In order to achieve the above purpose, the invention provides a multi-source heterogeneous video processing display device and method for a ship system, the whole thought is as follows:
An FPGA-plus-ARM architecture is adopted, making full use of the parallel computing and pipeline processing capability of the FPGA and the low-power, high-performance characteristics of the ARM architecture. It realizes mixed superposition display of multi-channel ultra-high-resolution uncompressed FC video (photoelectric video and radar video) together with compressed network video, while guaranteeing display quality, stability, and high real-time performance. Meanwhile, the invention can synchronously encode the fused and superimposed output to the network interface to realize data sharing, or receive network video stream information through the network interface, decode it, and display it superimposed on the graphical interface. More specifically:
In one aspect, the present invention provides a multi-source heterogeneous video processing display device for a ship system, comprising: a protocol offloading unit, a video frame format encapsulation unit, a radar scan conversion unit, a graphics distribution unit, a layer separation unit, a primary fusion superposition unit, a secondary fusion superposition unit, an encoding and decoding unit, and a video comprehensive processing unit. The protocol offloading unit, the video frame format encapsulation unit, the radar scan conversion unit, the primary fusion superposition unit and the layer separation unit are arranged on an FPGA processing module, while the secondary fusion superposition, encoding/decoding, and video comprehensive processing functions run on the ARM side;
The input end of the protocol offloading unit is connected to the fibre channel, and its output end is connected to the video frame format encapsulation unit and the radar scan conversion unit; it parses the FC or 10 Gigabit Ethernet protocol to obtain radar video data and photoelectric video data;
the output end of the video frame format encapsulation unit is connected to the video comprehensive processing unit and is used for re-encapsulating the photoelectric video data to obtain an original video layer; the output end of the radar scan conversion unit is connected to the primary fusion superposition unit and is used for performing multi-mode scan conversion on the radar video and configuring the various windows, the afterglow function, and the PPI wake function to obtain a radar video layer;
The input end of the graphics distribution unit is connected to the graphics input interface, and its output ends are connected to the layer separation unit and the secondary fusion superposition unit; it parallelizes the input video interface signal, converting one path into RGB888 color space data sent to the layer separation unit and the other path into YUV422 color space data sent to the secondary fusion superposition unit to serve as the graphics bottom layer. The output end of the layer separation unit is connected to the primary fusion superposition unit; it performs layer separation on the RGB888 color space data to extract the mouse layer and the plotting layer while discarding the sea chart layer data, obtaining the mouse plotting layer;
the output end of the primary fusion superposition unit is connected with the secondary fusion superposition unit and is used for carrying out fusion superposition on the radar video layer and the mouse plotting layer so as to obtain a primary fusion superposition layer; the output end of the secondary fusion superposition unit is connected with the video comprehensive processing unit and is used for carrying out secondary fusion superposition processing on the graphics bottom layer and the primary fusion superposition layer so as to form a secondary fusion superposition layer;
The encoding and decoding unit is externally connected to the Ethernet channel and internally connected to the video comprehensive processing unit. It receives the video code stream sent over the Ethernet channel, decodes it, and sends the decoded data to the video comprehensive processing unit to form a compressed video layer; it also receives the mixed-window output video signal sent by the video comprehensive processing unit, encodes it, and sends it out through the Ethernet channel;
The video comprehensive processing unit is used for superimposing and windowed-displaying the secondary fusion superposition layer, the original video layer, and the compressed video layer to form a mixed-window output video signal.
Further preferably, the fibre channel runs the FC protocol at a 4.25G/8.5G rate, or 10 Gigabit Ethernet based on the Ethernet protocol; if the fibre channel runs the FC protocol, the protocol offloading unit supports the FC-AE-ASM protocol; if it runs 10 Gigabit Ethernet, the protocol offloading unit is compatible with the IEEE 802.3ae/ap protocols.
Further preferably, the RGB color space data adopts the RGB888 format, in which each of the red, green, and blue channels has a color depth of 8 bits, so each pixel consists of 24 bits of data; the YUV422 color space data is obtained from the RGB888 format by color space conversion using interpolation.
Further preferably, the method for extracting the mouse layer and the plotting layer by the layer separation unit comprises the following steps:
According to the color key values set by the host computer, the layer separation unit traverses and examines each RGB888 color value in the RGB888 color space data and, by means of a lookup table, retains the color key values set by the host computer to obtain the mouse layer and the plotting layer; color values not in the lookup table are zero-filled so that they appear as black pixels, thereby forming the mouse plotting layer.
Further preferably, the primary fusion superposition unit superimposes pixel values according to the principle that the mouse plotting layer information is on the upper layer and the radar video layer information is on the lower layer; if, at the same pixel, the mouse plotting layer value is M and the radar video layer value is N, the fused pixel value is P = λM + (1 − λ)N, where 0 ≤ λ ≤ 1.
Further preferably, the secondary fusion superposition unit superimposes the primary fusion superposition layer sent by the primary fusion superposition unit onto the graphics bottom layer sent by the graphics distribution unit, with the primary fusion superposition layer on top and the graphics bottom layer underneath; the superposition rule is that, during the secondary fusion, all black pixels in the primary fusion superposition layer are replaced by the corresponding pixel values of the graphics bottom layer, while non-black pixel values remain unchanged, thereby forming the secondary fusion superposition layer.
Further preferably, the encoding and decoding unit receives network video stream information and can simultaneously decode up to 9 channels of 1920×1080@30Hz compressed code streams, sending the decoded compressed video layer information to the video comprehensive processing unit for corresponding windowed display; it also receives the mixed-window output picture, encodes it, and sends it into the Ethernet channel; and it allows different encoding parameters to be configured through the host computer according to user requirements, forming the compressed video layer during decoding.
On the other hand, the invention provides a multi-source heterogeneous video processing and displaying method of a ship system, which comprises the following steps:
According to the configuration of the fibre channel, parsing the FC or 10 Gigabit Ethernet protocol to obtain uncompressed original radar video data and photoelectric video data;
Repackaging the photoelectric video data to form an original video layer; the radar video is subjected to scanning conversion in multiple modes, and various window configurations, afterglow functions and PPI wake function configurations are carried out to form a radar video layer;
Parallelizing the input video interface signals, converting one path of the input video interface signals into RGB888 color space data, and converting the other path of the input video interface signals into YUV422 color space data to form a graph bottom layer;
Performing layer separation processing on RGB888 color space data, extracting a mouse layer and a plotting layer, and discarding sea chart layer data to form the mouse plotting layer;
carrying out first fusion superposition on data of a radar video layer and a mouse plotting layer, wherein the mouse plotting layer is arranged on the upper layer, and the radar video layer is arranged on the lower layer, so as to form a first fusion superposition layer;
performing fusion superposition processing on the graphics bottom layer and the first fusion superposition layer again, with the first fusion superposition layer on the upper layer and the graphics bottom layer on the lower layer, forming a second fusion superposition layer after fusion;
Meanwhile, compressed network video stream information is received through an Ethernet channel, and is correspondingly decoded to form a compressed video layer;
Superimposing and mixed-window displaying the secondary fusion superposition layer, the original video layer, and the compressed video layer, wherein the secondary fusion superposition layer is always displayed full screen, while the original video layer and the compressed video layer are displayed full screen or non-full screen according to window size and position information configured by host computer instruction;
Encoding the video information output by the mixed window to form a network video stream, and transmitting it through the Ethernet channel.
Further preferably, the first fusion superposition follows the principle that the plotting layer information is on the upper layer and the radar layer information is on the lower layer; if, at the same pixel, the mouse plotting layer value is M and the radar video layer value is N, the pixel value is P = λM + (1 − λ)N, where 0 ≤ λ ≤ 1.
Further preferably, the second fusion superposition superimposes the first fusion superposition layer onto the graphics bottom layer, with the first fusion superposition layer on top and the graphics bottom layer underneath; the superposition rule is that, during the second fusion, all black pixel values in the first fusion superposition layer are replaced by the corresponding pixel values of the graphics bottom layer, while non-black pixel values remain unchanged.
In general, compared with the prior art, the above technical solutions conceived by the present invention have the following beneficial effects:
For comprehensive display of multi-source heterogeneous video, the traditional approaches use either FPGA processing throughout or software programming on CPU computing power. In the former, the FPGA completes fusion processing of the various layers (base map, radar video, plotting, mouse, photoelectric video, and so on), and the fused layers are uniformly superimposed and then shown on the display. Although an FPGA offers massive concurrency, pipelining, flexibility, and customizability, as the integrated functions grow (ultra-high resolution, parallel processing of multiple video channels), drawbacks such as poor compilation efficiency and resource shortages emerge, and the finally compiled FPGA design performs poorly and lacks stability. The latter relies on the CPU, so the computing-power requirement is high; it occupies substantial CPU resources, weakens the CPU's capacity for other functions, and reduces the timeliness of multi-window display.
The invention adopts a function-separation processing technique on an FPGA+ARM architecture: the FPGA handles FC or 10 Gigabit Ethernet protocol offloading, video frame encapsulation, radar scan conversion, layer separation, and primary fusion superposition; the ARM completes secondary fusion superposition, video encoding/decoding, and video comprehensive processing. This architecture makes full use of the parallel computing and pipeline processing capability of the FPGA and the low-power, high-performance characteristics of ARM, realizes mixed superposition display of multi-channel ultra-high-resolution uncompressed FC video (photoelectric and radar video) together with compressed network video, improves the on-screen display quality and stability of the display console, and guarantees strong real-time video display.
The invention can synchronously encode the mixed-window output video and send it to the network interface to realize data sharing, or receive network video stream information through the network interface, decode it, and display it superimposed on the graphical interface. Compared with the traditional solution for sharing the display console's graphical interface, this scheme needs one less encoding/decoding device to achieve network data sharing.
Drawings
FIG. 1 is a block diagram of a multi-source heterogeneous video processing display device of a ship system provided by an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a graphic allocation unit provided by an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a primary fusion superposition processing provided by an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a secondary fusion superposition processing provided by an embodiment of the present invention;
fig. 5 is a schematic block diagram of video integrated processing according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In one aspect, as shown in Fig. 1, the present invention provides a multi-source heterogeneous video processing display device for a ship system, comprising: a protocol offloading unit, a video frame format encapsulation unit, a radar scan conversion unit, a graphics distribution unit, a layer separation unit, a primary fusion superposition unit, a secondary fusion superposition unit, an encoding and decoding unit, and a video comprehensive processing unit; the protocol offloading unit, the video frame format encapsulation unit, the radar scan conversion unit, the primary fusion superposition unit and the layer separation unit are arranged on an FPGA processing module. The device can receive uncompressed original radar video data or infrared/photoelectric video data from the fibre channel and, according to local display requirements, complete multi-layer fusion superposition of the mouse layer, plotting layer, graphics layer and radar video layer, as well as multi-source mixed-window superposition display of the graphics bottom layer plus primary fusion superposition layer, the original video layer, and the compressed video layer.
(1) Protocol offloading unit
The input of the protocol offloading unit is the externally connected fibre channel. The fibre channel can carry either the FC protocol at a 4.25G/8.5G rate or 10 Gigabit Ethernet based on the Ethernet protocol, and the two protocols can be dynamically loaded online according to user requirements. The protocol offloading unit serves as the functional unit communicating with the outside and runs and maintains the protocol stack; when running the FC protocol it supports the FC-AE-ASM protocol and conforms to the FC-FS and GJB 6411-2008 specifications; when running the 10 Gigabit Ethernet protocol it is compatible with IEEE 802.3ae/ap and conforms to the 10GBASE-SR specification;
More specifically:
The fibre channel carries several channels of uncompressed original radar video and several channels of photoelectric video. Radar and photoelectric video are distinguished by different 'groups' and different 'marker bits': under the FC protocol, radar video and photoelectric video are distinguished by different S_IDs; on 10 Gigabit Ethernet, different multicast addresses are used. The protocol offloading unit parses the FC or 10 Gigabit Ethernet protocol to obtain the payload video data, distributing radar video data to the radar scan conversion unit and photoelectric video data to the video frame format encapsulation unit. The video frame formats are described in Table 1, and a routing sketch follows the table;
TABLE 1
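As an illustration of the dispatch logic, a minimal C sketch of the protocol offloading unit's routing decision is given below. The S_ID/multicast masks, constants, and function names are assumptions for the sketch, not values from the patent:

    /* Sketch only: route de-encapsulated payloads to the radar scan conversion
       path or the video frame encapsulation path. ID masks are hypothetical. */
    #include <stdint.h>
    #include <stddef.h>

    #define SID_RADAR_PREFIX   0x010000u   /* assumed FC S_ID prefix for radar video */
    #define MCAST_RADAR_PREFIX 0xE0010000u /* assumed multicast prefix for radar video */

    typedef enum { LINK_FC, LINK_10GE } link_mode_t;

    typedef struct {
        uint32_t s_id;         /* FC source identifier (FC mode) */
        uint32_t mcast_addr;   /* IPv4 multicast group (10GE mode) */
        const uint8_t *payload;
        size_t len;
    } offload_frame_t;

    void radar_scan_convert_push(const uint8_t *p, size_t n);  /* downstream units of Fig. 1 */
    void video_frame_encap_push(const uint8_t *p, size_t n);

    void protocol_offload_dispatch(link_mode_t mode, const offload_frame_t *f)
    {
        int is_radar = (mode == LINK_FC)
            ? ((f->s_id & 0xFF0000u) == SID_RADAR_PREFIX)            /* by S_ID */
            : ((f->mcast_addr & 0xFFFF0000u) == MCAST_RADAR_PREFIX); /* by multicast group */

        if (is_radar)
            radar_scan_convert_push(f->payload, f->len);   /* radar video */
        else
            video_frame_encap_push(f->payload, f->len);    /* photoelectric video */
    }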
(2) Video frame format encapsulation unit
The input end of the video frame format encapsulation unit is connected to the protocol offloading unit, and its output end is connected to the video comprehensive processing unit. It receives the photoelectric video data delivered by the protocol offloading unit and re-encapsulates it so that its frame format satisfies the BT.1120 video frame format, i.e., the data structure specified by ITU-R (International Telecommunication Union Radiocommunication Sector), forming the original video layer. In this embodiment, at 3840×2160 resolution for example, the encapsulated data format is shown in Table 2; an encapsulation sketch follows the table:
TABLE 2
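The following C sketch illustrates BT.1120-style line encapsulation, assuming 8-bit YCbCr 4:2:2 samples and FF 00 00 XY timing reference words; blanking intervals, protection bits, and the exact code ordering of ITU-R BT.1120 are simplified here, so this is a conceptual illustration rather than a conformant framer:

    #include <stdint.h>
    #include <string.h>

    #define ACTIVE_W 3840  /* luma samples per active line at 2160p */

    /* Build the XY word: bit7 = 1, F = field, V = vertical blanking,
       H = 0 for SAV / 1 for EAV; protection bits omitted for brevity. */
    static uint8_t xy_word(int f, int v, int h)
    {
        return (uint8_t)(0x80 | (f << 6) | (v << 5) | (h << 4));
    }

    /* Copy one line of YCbCr 4:2:2 payload (Cb Y Cr Y ...) between SAV and EAV.
       dst must hold at least ACTIVE_W*2 + 8 bytes. In a real BT.1120 stream the
       EAV opens the blanking interval rather than trailing the active samples. */
    size_t encapsulate_line(uint8_t *dst, const uint8_t *ycbcr422, int f, int v)
    {
        static const uint8_t pre[3] = {0xFF, 0x00, 0x00};
        size_t n = 0;
        memcpy(dst + n, pre, 3); n += 3; dst[n++] = xy_word(f, v, 0); /* SAV */
        memcpy(dst + n, ycbcr422, ACTIVE_W * 2); n += ACTIVE_W * 2;   /* active video */
        memcpy(dst + n, pre, 3); n += 3; dst[n++] = xy_word(f, v, 1); /* EAV */
        return n;
    }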
(3) Radar scan conversion unit
The input end of the radar scan conversion unit is connected to the protocol offloading unit and its output end to the primary fusion superposition unit. It performs multi-mode scan conversion (P display, B display, E display, and so on) of the received radar video, configures the various windows such as the PPI window, AR window, and small windows, and performs operations such as afterglow and PPI wake function configuration, at which point the radar video layer is formed;
More specifically: the radar scan conversion unit firstly completes scan conversion of radar video signals, and preprocesses the size, position information, afterglow, PPI wake, shielding relation and the like of a radar video window according to configuration information of an upper computer;
(4) Graphic distribution unit
The input end of the graphics distribution unit is connected to the graphics input interface, and its output ends are connected to the layer separation unit and the secondary fusion superposition unit. It parallelizes the input differential serial video signal, converting one path into RGB888 color values sent to the layer separation unit and the other path into YUV422 color values sent to the secondary fusion superposition unit;
More specifically, the data converted to the RGB color space adopts the RGB888 format: each of red, green, and blue has a color depth of 8 bits, and each pixel consists of 24 bits of data. The data converted to the BT.1120 data format is in YUV422 format, obtained from RGB888 through color space transformation and interpolation. In this embodiment, the graphics distribution unit must supply different video data formats according to the actual requirements of each downstream unit: the data sent to the layer separation unit is in RGB888 format, where no pixel value may suffer data loss, so the three color values of every pixel are completely represented; the data sent to the secondary fusion superposition unit is in YUV422 format, meeting that unit's format requirement for received data. The two data formats, RGB888 and YUV422, differ in color characteristic values but characterize the same graphics signal. In this embodiment, the schematic block diagram of the graphics distribution unit is shown in Fig. 2, and a conversion sketch follows.
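As a sketch of the second conversion path, the following C routine converts a pair of adjacent RGB888 pixels to one YUV422 group; the BT.709 limited-range coefficients are an assumption, since the text only specifies color space conversion with interpolation:

    #include <stdint.h>

    static inline uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

    /* Convert two adjacent RGB888 pixels (rgb[0..5]) into one 4:2:2 group:
       out = { Y0, Cb, Y1, Cr }, chroma averaged over the pixel pair. */
    void rgb888_pair_to_yuv422(const uint8_t rgb[6], uint8_t out[4])
    {
        int y[2], cb[2], cr[2];
        for (int i = 0; i < 2; i++) {
            int r = rgb[3*i], g = rgb[3*i+1], b = rgb[3*i+2];
            y[i]  = clamp8(( 47 * r + 157 * g +  16 * b) / 256 + 16);   /* ~BT.709 luma */
            cb[i] = clamp8((-26 * r -  87 * g + 112 * b) / 256 + 128);
            cr[i] = clamp8((112 * r - 102 * g -  10 * b) / 256 + 128);
        }
        out[0] = (uint8_t)y[0];
        out[1] = (uint8_t)((cb[0] + cb[1]) / 2);  /* interpolated chroma */
        out[2] = (uint8_t)y[1];
        out[3] = (uint8_t)((cr[0] + cr[1]) / 2);
    }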
(5) Layer separation unit
The input end of the layer separation unit is connected to the graphics distribution unit and its output end to the primary fusion superposition unit. It performs layer separation on the graphics input from the graphics distribution unit, extracting the mouse layer and the plotting layer while discarding the sea chart layer data, thereby forming the mouse plotting layer. More specifically, the layer separation unit receives RGB888 color space data containing multi-layer information such as the mouse layer, plotting layer, and sea chart layer, each layer identified by a different configured color value; the mouse layer and plotting layer must be extracted and the sea chart layer data discarded. According to the color key values set by the host computer, the layer separation unit traverses and examines each RGB888 color value and, via a lookup table, retains the host-configured color key values to obtain the mouse layer and plotting layer; color values not in the lookup table are zero-filled so that they appear as 'black' pixels, forming the mouse plotting layer. The processed output of the layer separation unit retains only the mouse and plotting layers, without changing the position information or color values of their pixels, so the whole frame keeps only mouse and plotting information while the remainder appears as black data (the black layer); if shown on the display unit, the processed frame would present only the mouse and plotting information on an all-black picture. For one line of data in the 3840×2160 image used in this embodiment, the data before processing is shown in Table 3 (a lookup-table sketch follows Table 4);
Table 3
Column  1    2    3          4    5    6    …  3837  3838       3839  3840
Data    RGB  RGB  Color key  RGB  RGB  RGB  …  RGB   Color key  RGB   RGB
The processed data are shown in Table 4:
Table 4
Column  1  2  3          4  5  6  …  3837  3838       3839  3840
Data    0  0  Color key  0  0  0  …  0     Color key  0     0
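A C sketch of this color-key screening is shown below; representing the lookup table as a 2^24-entry bitmap (2 MiB) is an implementation assumption:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    static uint8_t keep_lut[1u << 21];  /* 2^24 colors, 1 bit each = 2 MiB */

    static inline void lut_set(uint32_t rgb) { keep_lut[rgb >> 3] |= (uint8_t)(1u << (rgb & 7)); }
    static inline int  lut_get(uint32_t rgb) { return (keep_lut[rgb >> 3] >> (rgb & 7)) & 1; }

    /* Host computer configures the color keys of the mouse and plotting layers. */
    void layer_sep_configure(const uint32_t *keys, int nkeys)
    {
        memset(keep_lut, 0, sizeof keep_lut);
        for (int i = 0; i < nkeys; i++)
            lut_set(keys[i] & 0xFFFFFFu);
    }

    /* Traverse one frame of packed RGB888; zero-fill anything not in the LUT,
       leaving only mouse and plotting pixels on a black layer. */
    void layer_sep_frame(uint32_t *px, size_t npixels)
    {
        for (size_t i = 0; i < npixels; i++)
            if (!lut_get(px[i] & 0xFFFFFFu))
                px[i] = 0;  /* becomes a "black layer" pixel */
    }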
(6) One-time fusion superposition unit
The input of the primary fusion superposition unit is connected with the radar scan conversion unit and the image layer separation unit, and the output end of the primary fusion superposition unit is connected with the secondary fusion superposition unit and is used for superposition and fusion of data of a radar video layer and a mouse plotting layer;
More specifically, the primary fusion superposition unit receives the radar video layer sent by the radar scan conversion unit and the mouse plotting layer data sent by the layer separation unit, and superimposes them according to the principle that the mouse plotting layer information is on the upper layer and the radar video layer information on the lower layer. Because the radar video layer and the mouse plotting layer have the same resolution, every pair of corresponding pixel values must be processed during superposition.
The unit traverses and processes each pixel of the two layers. At the pixel in the same position, if the mouse plotting layer value is M and the radar video layer value is N, the pixel value P is computed as P = λM + (1 − λ)N, where 0 ≤ λ ≤ 1. The complete processed data frame is called the primary fusion superposition layer. The value of λ can be configured by host computer instruction to achieve different effects after fusion. In this embodiment, the primary fusion superposition processing flow is shown in Fig. 3, and a per-pixel sketch follows Table 5;
If the radar video layer data is denoted RD, the primary fusion superposition layer data after superposition and fusion can be represented as in Table 5:
Table 5
Column  1  2  3          4  5  6   …  3837  3838       3839  3840
Data    0  0  Color key  0  0  RD  …  0     Color key  RD    0
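The per-pixel blend can be sketched in C as follows, with λ carried in Q8 fixed point (an assumption, since FPGA datapaths typically avoid floating point):

    #include <stdint.h>
    #include <stddef.h>

    /* lambda_q8: 0..256 representing λ in [0,1]. M = mouse plotting layer
       sample, N = radar video layer sample, applied per 8-bit channel. */
    static inline uint8_t blend_ch(uint8_t m, uint8_t n, unsigned lambda_q8)
    {
        return (uint8_t)((m * lambda_q8 + n * (256 - lambda_q8)) >> 8);
    }

    /* P = λM + (1-λ)N over a whole frame of channel bytes. */
    void fuse_once(uint8_t *out, const uint8_t *mouse_plot, const uint8_t *radar,
                   size_t nbytes, unsigned lambda_q8)
    {
        for (size_t i = 0; i < nbytes; i++)
            out[i] = blend_ch(mouse_plot[i], radar[i], lambda_q8);
    }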
(7) Secondary fusion superposition unit
The input ends of the secondary fusion superposition unit are connected to the primary fusion superposition unit and the graphics distribution unit, and its output end is connected to the video comprehensive processing unit; it performs fusion superposition processing on the graphics bottom layer and the primary fusion superposition layer once more. More specifically, the layer sent by the primary fusion superposition unit has high priority and must be placed on top; the video information it contains comprises, from top to bottom, the mouse layer, the plotting layer, the radar layer, and the black layer. The graphics bottom layer sent by the graphics distribution unit serves as the bottommost layer of the video output, so in the secondary fusion the primary fusion superposition layer is stacked on the graphics bottom layer. Because the black layer in the primary fusion superposition layer has an obvious signature, consisting entirely of zero data, all of its pixel values can be replaced by the corresponding graphics bottom-layer pixel values during superposition, while non-black pixel values remain unchanged. The fused data can thus be described as: every layer of the primary fusion superposition layer except the black layer stays on top, and the black layer is replaced by the graphics bottom layer underneath. In this embodiment, the secondary fusion superposition processing flow is shown in Fig. 4; a replacement sketch follows Table 6;
In this embodiment, if the graphics bottom-layer data is denoted PD, the data after secondary fusion superposition can be represented as in Table 6:
Table 6
Column  1   2   3          4   5   6   …  3837  3838       3839  3840
Data    PD  PD  Color key  PD  PD  RD  …  PD    Color key  RD    PD
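A C sketch of the black-pixel replacement rule follows; treating an all-zero 24-bit value as the transparency key matches the tables above:

    #include <stdint.h>
    #include <stddef.h>

    /* Second fusion pass: the "black layer" (all-zero pixels) left over from
       layer separation is the transparency key, so those pixels take the
       graphics bottom-layer value; everything else stays on top. */
    void fuse_twice(uint32_t *out, const uint32_t *fused_once,
                    const uint32_t *graphics_bottom, size_t npixels)
    {
        for (size_t i = 0; i < npixels; i++) {
            uint32_t p = fused_once[i] & 0xFFFFFFu;
            out[i] = (p == 0) ? graphics_bottom[i]  /* black -> show bottom layer */
                              : fused_once[i];      /* non-black stays unchanged */
        }
    }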
Further preferably, during display by the video comprehensive processing unit, the secondary fusion superposition layer, which carries graphics information such as the mouse, plotting, radar video, and chart, is always displayed full screen; the original video layer and the compressed video layer are displayed full screen or non-full screen according to window size and position information configured by host computer instruction.
(8) Encoding and decoding unit
The encoding and decoding unit is externally connected to the Ethernet interface and internally connected to the video comprehensive processing unit. It is mainly responsible for receiving the video code stream sent over Ethernet, decoding it, and sending the decoded data to the video comprehensive processing unit, at which point the compressed video layer is formed; it also receives the superimposed display picture information sent by the video comprehensive processing unit, encodes it, and sends it out over the network;
The encoding and decoding unit receives network video stream information and can simultaneously decode up to 9 channels of 1920×1080@30Hz compressed code streams, sending the decoded compressed video layer information to the video comprehensive processing unit for corresponding windowed display. It can also receive the mixed-window output picture, encode it, and send it into the Ethernet channel. In addition, the encoding and decoding unit allows different encoding parameters, such as the H.264/H.265 algorithm, bit rate, and frame rate, to be configured through the host computer according to user requirements; a sketch of such a parameter block follows.
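A hypothetical parameter block covering the configurable items named in the text (H.264/H.265 selection, bit rate, frame rate, up to nine decode channels) might look like the following C sketch; the structure and field names are assumptions, not the patent's interface:

    #include <stdint.h>

    #define MAX_DECODE_CH 9   /* up to nine 1920x1080@30Hz streams */

    typedef enum { CODEC_H264, CODEC_H265 } codec_t;

    typedef struct {
        codec_t  codec;          /* algorithm selected by the host computer */
        uint32_t bitrate_kbps;   /* e.g. 8000 for the mixed-window output */
        uint16_t fps;            /* e.g. 30 */
        uint16_t width, height;  /* e.g. 1920 x 1080 */
    } codec_cfg_t;

    typedef struct {
        codec_cfg_t encoder;                 /* mixed-window output -> Ethernet */
        codec_cfg_t decoder[MAX_DECODE_CH];  /* network streams -> compressed video layer */
        uint8_t     active_decoders;         /* 0..9 */
    } codec_unit_cfg_t;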
(9) Video integrated processing unit
The input ends of the video comprehensive processing unit are connected to the secondary fusion superposition unit, the video frame format encapsulation unit, and the encoding and decoding unit, and its output end is connected to the display unit. It performs video superposition processing on the received secondary fusion superposition layer, original video layer, and compressed video layer, then performs mixed-window display. More specifically, the video comprehensive processing unit outputs the secondary fusion superposition layer as the graphics bottom layer, with output resolution and frame rate matching those of the input layer. During display, the secondary fusion superposition layer is always full screen, since it contains graphics information such as the mouse, plotting, radar video, and chart; the original video layer and the compressed video layer are displayed full screen or non-full screen according to window size and position information configured by host computer instruction.
The original video layer and the compressed video layer carry independent video information; the position, size, and stacking order of their windows can be adjusted as needed, and the configured windowed pictures are superimposed on the graphics bottom layer and sent to the display terminal. To tune the display effect, parameters such as brightness, chromaticity, and contrast of the output video can be adjusted. In this embodiment, the schematic block diagram of the video comprehensive processing unit is shown in Fig. 5, and a composition sketch follows.
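The mixed-window pass can be sketched in C as below: the secondary fusion superposition layer is the full-screen base, and each enabled window is copied over it in ascending stacking order; the window descriptor is an assumed structure, not the patent's data format:

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        const uint32_t *src;      /* the window's own video buffer (src_w x src_h) */
        int src_w, src_h;
        int x, y;                 /* top-left position on screen */
        int z;                    /* stacking order, higher = on top */
        int enabled;
    } window_t;

    void compose(uint32_t *screen, int scr_w, int scr_h,
                 const uint32_t *fusion_layer, window_t *win, int nwin)
    {
        /* Base: the secondary fusion superposition layer, always full screen. */
        memcpy(screen, fusion_layer, (size_t)scr_w * scr_h * sizeof *screen);

        /* Naive selection sort by z keeps the sketch short; win[] is reordered. */
        for (int i = 0; i < nwin; i++)
            for (int j = i + 1; j < nwin; j++)
                if (win[j].z < win[i].z) { window_t t = win[i]; win[i] = win[j]; win[j] = t; }

        /* Paint each window, clipped to the screen, lowest z first. */
        for (int k = 0; k < nwin; k++) {
            if (!win[k].enabled) continue;
            for (int r = 0; r < win[k].src_h; r++) {
                int sy = win[k].y + r;
                if (sy < 0 || sy >= scr_h) continue;
                for (int c = 0; c < win[k].src_w; c++) {
                    int sx = win[k].x + c;
                    if (sx >= 0 && sx < scr_w)
                        screen[sy * scr_w + sx] = win[k].src[r * win[k].src_w + c];
                }
            }
        }
    }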
On the other hand, the invention provides a ship system multi-source heterogeneous video processing display method corresponding to the ship system multi-source heterogeneous video processing display device, which comprises the following steps:
S1: the protocol unloading unit is used for solving and separating out uncompressed original video payload data sent by the optical fiber channel according to a configured protocol, sending photoelectric video data to the video frame format packaging unit and sending radar video data to the radar scanning conversion unit;
S2: encapsulating the photoelectric video data in a BT1120 video frame format by adopting a video frame format encapsulating unit to form an original video layer;
S3: the radar scan conversion unit performs scan conversion of multiple modes such as P display, B display and E display on the received radar video, performs position size configuration on various windows such as PPI window and AR window, and performs afterglow function and PPI wake function configuration to form a radar video layer;
S4: the graphic distribution unit performs color space conversion on the input graphic signals, one path of the graphic signals is converted into an RGB888 mode, the other path of the graphic signals is converted into a YUV422 mode, and a graphic bottom layer is formed at the moment;
S5: the layer separation unit extracts a mouse layer and a plotting layer, and discards sea chart layer data to form the mouse plotting layer;
S6: the primary fusion superposition unit performs superposition fusion processing on the two video layer information sent by the S3 and the S5 to form a primary fusion superposition layer;
s7: the secondary fusion superposition unit performs superposition fusion processing on the two video layer information sent by the S6 and the S4 again to form a secondary fusion superposition layer;
S8: the encoding and decoding unit decodes the received Ethernet channel video stream data to form compressed video layer information and sends the compressed video layer information to the video comprehensive processing unit;
s9: the video comprehensive processing unit performs video superposition processing on the received secondary fusion superposition layer, the original video layer and the compressed video layer, and then performs mixed window output display, so that the window position, the size, the superposition sequence and the like can be adjusted according to the needs. In order to adjust the display effect, parameters such as brightness, chromaticity, contrast and the like of the output video can be adjusted;
S10: the encoding and decoding unit receives the output information of the mixed window, encodes the output information and sends the encoded output information to the Ethernet channel. Meanwhile, the encoding and decoding unit can configure different encoding parameters, such as H.264/H.265 algorithm, code rate, frame rate and other information, through the upper computer according to the user requirements.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (9)

1. A multi-source heterogeneous video processing display device for a ship system, comprising: a protocol offloading unit, a video frame format encapsulation unit, a radar scan conversion unit, a graphics distribution unit, a layer separation unit, a primary fusion superposition unit, a secondary fusion superposition unit, an encoding and decoding unit, and a video comprehensive processing unit; wherein the protocol offloading unit, the video frame format encapsulation unit, the radar scan conversion unit, the primary fusion superposition unit and the layer separation unit are arranged on an FPGA processing module;
The input end of the protocol offloading unit is connected to the fibre channel, and its output end is connected to the video frame format encapsulation unit and the radar scan conversion unit, for parsing the FC or 10 Gigabit Ethernet protocol to obtain radar video data and photoelectric video data; the output end of the video frame format encapsulation unit is connected to the video comprehensive processing unit, for re-encapsulating the photoelectric video data to obtain an original video layer; the output end of the radar scan conversion unit is connected to the primary fusion superposition unit, for performing multi-mode scan conversion on the radar video and configuring the various windows, the afterglow function, and the PPI wake function to obtain a radar video layer;
The input end of the graphics distribution unit is connected to the graphics input interface, and its output ends are connected to the layer separation unit and the secondary fusion superposition unit, for parallelizing the input video interface signal, converting one path into RGB888 color space data sent to the layer separation unit and the other path into YUV422 color space data sent to the secondary fusion superposition unit to serve as a graphics bottom layer; the output end of the layer separation unit is connected to the primary fusion superposition unit, for performing layer separation processing on the RGB888 color space data to extract a mouse layer and a plotting layer while discarding sea chart layer data, obtaining the mouse plotting layer; the output end of the primary fusion superposition unit is connected to the secondary fusion superposition unit, for fusing and superimposing the radar video layer and the mouse plotting layer to obtain a primary fusion superposition layer;
The output end of the secondary fusion superposition unit is connected to the video comprehensive processing unit, for performing secondary fusion superposition processing on the graphics bottom layer and the primary fusion superposition layer to form a secondary fusion superposition layer; the encoding and decoding unit is externally connected to the Ethernet channel and internally connected to the video comprehensive processing unit, for receiving the video code stream sent over the Ethernet channel, decoding it, and sending the decoded data to the video comprehensive processing unit to form a compressed video layer, and also for receiving the mixed-window output video signal sent by the video comprehensive processing unit, encoding it, and sending it out through the Ethernet channel; the video comprehensive processing unit is used for superimposing and windowed-displaying the secondary fusion superposition layer, the original video layer, and the compressed video layer to form a mixed-window output video signal;
The method for extracting the mouse layer and the plotting layer by the layer separation unit comprises the following steps:
According to the color key values set by the host computer, the layer separation unit traverses and examines each RGB888 color value in the RGB888 color space data and, by means of a lookup table, retains the color key values set by the host computer to obtain the mouse layer and the plotting layer; color values not in the lookup table are zero-filled so that they appear as black pixels, thereby forming the mouse plotting layer.
2. The multi-source heterogeneous video processing display device for a ship system of claim 1, wherein the fibre channel runs the FC protocol at a 4.25G/8.5G rate or 10 Gigabit Ethernet based on the Ethernet protocol; if the fibre channel runs the FC protocol, the protocol offloading unit supports the FC-AE-ASM protocol; if it runs 10 Gigabit Ethernet, the protocol offloading unit is compatible with the IEEE 802.3ae/ap protocols.
3. The multi-source heterogeneous video processing display device for a ship system according to claim 1 or 2, wherein the RGB color space data adopts the RGB888 format, in which each of red, green, and blue has a color depth of 8 bits, so each pixel consists of 24 bits of data; the YUV422 color space data is obtained from the RGB888 format by color space conversion using interpolation.
4. The multi-source heterogeneous video processing display device for a ship system according to claim 3, wherein the primary fusion superposition unit superimposes pixel values according to the principle that the mouse plotting layer information is on the upper layer and the radar video layer information is on the lower layer; if, at the same pixel, the mouse plotting layer value is M and the radar video layer value is N, the fused pixel value is P = λM + (1 − λ)N, where 0 ≤ λ ≤ 1.
5. The multi-source heterogeneous video processing display device for a ship system according to claim 4, wherein the secondary fusion superposition unit superimposes the primary fusion superposition layer sent by the primary fusion superposition unit onto the graphics bottom layer sent by the graphics distribution unit, with the primary fusion superposition layer on top and the graphics bottom layer underneath; the superposition rule is that, during the secondary fusion, all black pixels in the primary fusion superposition layer are replaced by the corresponding pixel values of the graphics bottom layer, while non-black pixel values remain unchanged, thereby forming the secondary fusion superposition layer.
6. The multi-source heterogeneous video processing display device for a ship system according to claim 1 or 2, wherein the encoding and decoding unit receives network video stream information, simultaneously decodes up to 9 channels of 1920×1080@30Hz compressed code streams, and sends the decoded compressed video layer information to the video comprehensive processing unit for corresponding windowed display; it also receives the mixed-window output picture, encodes it, and sends it into the Ethernet channel; and it is further configured to accept different encoding parameters from the host computer according to user requirements, forming the compressed video layer during decoding.
7. A multi-source heterogeneous video processing and displaying method of a ship system is characterized by comprising the following steps:
According to the configuration of the fibre channel, parsing the FC or 10 Gigabit Ethernet protocol to obtain uncompressed original radar video data and photoelectric video data;
Repackaging the photoelectric video data to form an original video layer; the radar video is subjected to scanning conversion in multiple modes, and various window configurations, afterglow functions and PPI wake function configurations are carried out to form a radar video layer;
Parallelizing the input video interface signals, converting one path of the input video interface signals into RGB888 color space data, and converting the other path of the input video interface signals into YUV422 color space data to form a graph bottom layer;
Performing layer separation processing on RGB888 color space data, extracting a mouse layer and a plotting layer, and discarding sea chart layer data to form the mouse plotting layer;
carrying out first fusion superposition on data of a radar video layer and a mouse plotting layer, wherein the mouse plotting layer is arranged on the upper layer, and the radar video layer is arranged on the lower layer, so as to form a first fusion superposition layer;
performing fusion superposition processing on the graphics bottom layer and the first fusion superposition layer again, with the first fusion superposition layer on the upper layer and the graphics bottom layer on the lower layer, forming a second fusion superposition layer after fusion;
Meanwhile, compressed network video stream information is received through an Ethernet channel, and is correspondingly decoded to form a compressed video layer;
Superimposing and mixed-window displaying the secondary fusion superposition layer, the original video layer, and the compressed video layer, wherein the secondary fusion superposition layer is always displayed full screen, while the original video layer and the compressed video layer are displayed full screen or non-full screen according to window size and position information configured by host computer instruction;
encoding the video information output by the mixed window to form a network video stream, and transmitting the network video stream through an Ethernet channel;
the method for extracting the mouse layer and the plotting layer by the layer separation unit comprises the following steps:
According to the color key values set by the host computer, the layer separation unit traverses and examines each RGB888 color value in the RGB888 color space data and, by means of a lookup table, retains the color key values set by the host computer to obtain the mouse layer and the plotting layer; color values not in the lookup table are zero-filled so that they appear as black pixels, thereby forming the mouse plotting layer.
8. The multi-source heterogeneous video processing and display method for a ship system according to claim 7, wherein the first fusion superposition follows the principle that the plotting layer information is on the upper layer and the radar layer information is on the lower layer; if, at the same pixel, the mouse plotting layer value is M and the radar video layer value is N, the pixel value is P = λM + (1 − λ)N, where 0 ≤ λ ≤ 1.
9. The multi-source heterogeneous video processing and display method for a ship system according to claim 7, wherein the second fusion superposition superimposes the first fusion superposition layer onto the graphics bottom layer, with the first fusion superposition layer on top and the graphics bottom layer underneath; the superposition rule is that, during the second fusion, all black pixel values in the first fusion superposition layer are replaced by the corresponding pixel values of the graphics bottom layer, while non-black pixel values remain unchanged.
CN202211377465.XA 2022-11-04 2022-11-04 Multi-source heterogeneous video processing display device and method for ship system Active CN115733940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211377465.XA CN115733940B (en) 2022-11-04 2022-11-04 Multi-source heterogeneous video processing display device and method for ship system

Publications (2)

Publication Number Publication Date
CN115733940A (en) 2023-03-03
CN115733940B (en) 2024-10-22

Family

ID=85294705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211377465.XA Active CN115733940B (en) 2022-11-04 2022-11-04 Multi-source heterogeneous video processing display device and method for ship system

Country Status (1)

Country Link
CN (1) CN115733940B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117310679B (en) * 2023-11-28 2024-02-20 中国人民解放军空军工程大学 Gridding sensing system and method for detecting low-low aircraft
CN117914953B (en) * 2024-03-20 2024-06-07 中国船级社 Ship data processing method, device and equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000028518A2 (en) * 1998-11-09 2000-05-18 Broadcom Corporation Graphics display system
CN111510657A (en) * 2019-12-18 2020-08-07 中国船舶重工集团公司第七0九研究所 Multi-path radar and photoelectric video comprehensive display method and system based on FPGA

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
MY127855A (en) * 1999-06-09 2006-12-29 Mediatek Inc Corp Integrated video processing system having multiple video sources and implementing picture-in-picture with on-screen display graphics
JP2002344898A (en) * 2001-05-17 2002-11-29 Pioneer Electronic Corp Video display device, audio adjusting device, video and audio output device, and method for synchronizing video and audio
KR100322485B1 (en) * 2001-07-05 2002-02-07 이동욱 Multi-Channel Video Encoding apparatus and method thereof
JP4115879B2 (en) * 2003-05-09 2008-07-09 三菱電機株式会社 Image display device
CN101335857A (en) * 2007-06-25 2008-12-31 刘少龙 Method and apparatus for asynchronous compression and mixed display of video graphics-text
WO2012131701A2 (en) * 2011-03-11 2012-10-04 The Tata Power Company Ltd. Fpga system for processing radar based signals for aerial view display
CN105491272B (en) * 2015-12-02 2018-07-31 南京理工大学 Visible light based on FPGA and ARM dual processors and infrared image fusing device
CN108055478A (en) * 2017-12-18 2018-05-18 天津津航计算技术研究所 A kind of multi-channel video superposed transmission method based on FC-AV agreements
CN109743515B (en) * 2018-11-27 2021-09-03 中国船舶重工集团公司第七0九研究所 Asynchronous video fusion and superposition system and method based on soft core platform
CN110290336A (en) * 2019-07-16 2019-09-27 深圳市殷泰禾技术有限公司 A kind of HD video multimedia messages superimposer
CN110363676A (en) * 2019-07-19 2019-10-22 中交铁道设计研究总院有限公司 Railway transportation coal dust suppression intellectual monitoring analysis method based on big data
CN111935531A (en) * 2020-08-04 2020-11-13 天津七所精密机电技术有限公司 Integrated display system graph processing method based on embedded platform
CN112367509B (en) * 2020-11-10 2022-10-14 北京计算机技术及应用研究所 Method for realizing domestic four-way super-definition image comprehensive display device

Also Published As

Publication number Publication date
CN115733940A (en) 2023-03-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant