CN106600572A - Adaptive low-illumination visible image and infrared image fusion method - Google Patents
- Publication number
- CN106600572A (application CN201611142487.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- low
- fusion
- visible light
- frequency component
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The invention relates to an adaptive low-illumination visible image and infrared image fusion method, belonging to the field of digital image processing. The method comprises image preprocessing, NSCT transformation, frequency-domain coefficient fusion and inverse NSCT. It performs a multi-scale decomposition of the luminance image and the infrared image, extracts their high-frequency and low-frequency components, and selects different fusion rules for frequency-domain fusion according to the characteristics of those components. The inverse NSCT stage applies the multi-scale inverse transform to the fused high-frequency and low-frequency components to obtain the fused grayscale image, which is then weighted with the original color visible image to obtain a color fusion image. The method effectively retains more detail information from the original images, improves the contrast and resolution of the fused image, and can be widely used in intelligent traffic, video monitoring, medical diagnosis, target detection and national defense security.
Description
Technical Field
The present invention belongs to the field of digital image processing.
Background
With the rapid development of sensor technology, image fusion has gradually become a hot research field. Different kinds of sensors usually capture very different image information from the same scene, and the amount of information obtained by a single sensor is limited and can hardly meet application requirements. Image fusion combines the image information obtained by two or more sensors to generate a new image. The fused image has rich detail information, high contrast and similar advantages; it is widely applied in medical diagnosis, target recognition and tracking, video monitoring, intelligent transportation, national defense and security, and has high social application value. Determining fusion criteria, improving the quality of fused images and registering the original images remain technical difficulties in the field of image fusion.
Image fusion methods are mainly classified into fusion based on pixel-level analysis, fusion based on feature computation, and fusion based on decision analysis. Pixel-level fusion directly analyzes and combines the corresponding pixels of the original images to obtain the fused pixel values, and is the simplest and most widely applied approach in the field. Fusion based on the traditional wavelet transform has poor directional selectivity and easily introduces blocking artifacts, so the fused image has low contrast and is not conducive to observation and judgment by the human eye. Fusion based on the traditional Contourlet transform, when the fusion criterion is chosen improperly, yields fused images with a small brightness dynamic range and missing detail information, which seriously impairs their usefulness to the visual system.
In recent years, interest in image fusion research has increased. Low-illumination visible light images are often dark and unclear, which hinders observation and identification of targets. At present, pixel-level image fusion has no unified mathematical model, and the quality of fused images needs further improvement. Traditional fusion methods suffer from low contrast, a small brightness dynamic range and loss of detail information, and most of them handle only grayscale images, which limits their wider application.
Disclosure of Invention
The invention provides a self-adaptive low-illumination visible light image and infrared image fusion method, which aims to solve the problems of low contrast and small brightness dynamic range of a fusion image.
The technical scheme adopted by the invention is that the method comprises the following steps:
(1) acquiring an original low-illumination visible light image and an infrared image:
under the condition of low illumination, an infrared camera and a visible light camera on a pan-tilt head are used to collect an original color visible light image and an original infrared image respectively, the resolutions of both being 640 × 480;
(2) registering the obtained original infrared image and the color visible light image by matching scale-invariant feature points:
according to the scale invariant features of the original infrared image and the color visible light image, image registration is carried out, and images at any same position in a scene are ensured to be at the same position in the acquired image;
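As an illustration of step (2), the sketch below performs scale-invariant feature registration with OpenCV's SIFT implementation. The library choice, the 0.75 ratio test and the RANSAC threshold are illustrative assumptions (the patent names no toolchain, and cross-modal infrared/visible matching may need more robust descriptors in practice); all code sketches in this document use Python.

```python
import cv2
import numpy as np

def register_to_visible(ir_gray, vis_gray):
    """Warp the infrared image onto the visible image via SIFT feature matches."""
    sift = cv2.SIFT_create()
    kp_ir, des_ir = sift.detectAndCompute(ir_gray, None)
    kp_vis, des_vis = sift.detectAndCompute(vis_gray, None)

    # Ratio-test matching of the scale-invariant descriptors (0.75 is a
    # conventional value, assumed here).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_ir, des_vis, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Robustly estimate a homography so that any scene point falls at the
    # same pixel position in both images after warping.
    src = np.float32([kp_ir[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_vis[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = vis_gray.shape
    return cv2.warpPerspective(ir_gray, H, (w, h))
```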
(3) extracting a luminance image from the registered color visible light image using the HSI transformation, and enhancing the contrast of the registered infrared image according to the characteristics of the infrared sensor:
after registration the color visible light image is a three-channel image while the infrared image is a single-channel image, so their dimensions differ; the luminance image I_visible of the color visible light image is therefore extracted by the HSI color-space transformation, with the calculation shown in formula (1):
I_visible = (I_r + I_g + I_b) / 3 (1)
where I_r, I_g and I_b are the pixel values of the R, G and B channels of the low-illumination visible light image;
the pixel brightness values of the infrared image are inverted, which enhances the contrast of the infrared image:
I_IR = L - I_ir (2)
where I_IR is the inverted infrared image, I_ir is the registered infrared image, and L is the number of gray levels of the infrared image; for 8-bit infrared pixels, L = 2^8 = 256;
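A minimal sketch of step (3) under formulas (1) and (2); casting to float is an added convenience, not part of the patent:

```python
import numpy as np

def luminance_and_inverted_ir(vis_rgb, ir_registered, bits=8):
    """Formula (1): HSI intensity; formula (2): gray-level inversion."""
    # I_visible = (I_r + I_g + I_b) / 3, computed per pixel.
    I_visible = vis_rgb.astype(np.float64).mean(axis=2)
    # I_IR = L - I_ir with L = 2**bits gray levels (256 for 8-bit pixels).
    L = 2 ** bits
    I_IR = L - ir_registered.astype(np.float64)
    return I_visible, I_IR
```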
(4) applying the NSCT to the luminance image I_visible and the infrared image I_IR to perform a multi-scale decomposition and obtain the corresponding low-frequency and high-frequency components:
the NSCT comprises two parts: a non-subsampled pyramid filter bank and a non-subsampled directional filter bank; the non-subsampled pyramid filter bank realizes the multi-scale decomposition, and the non-subsampled directional filter bank realizes the decomposition along frequency-domain directions;
in the non-subsampled pyramid filter bank, the kth-level non-subsampled pyramid filter can be obtained by the following formula:
the non-subsampled pyramid filter also needs to satisfy the Bezout identity:
H0(z)G0(z)+H1(z)G1(z)=1 (4)
where H0(z) and G0(z) are the low-pass decomposition and synthesis filters of the non-subsampled pyramid, and H1(z) and G1(z) are its high-pass decomposition and synthesis filters;
the non-subsampled directional filter bank is built from fan-shaped (sector) filters, and upsampling the directional filters effectively eliminates spectral aliasing;
using the non-subsampled pyramid and the non-subsampled directional filter bank, the luminance image and the infrared image are transformed at multiple scales and the corresponding high-frequency and low-frequency components are extracted, where J ≥ d ≥ 1 and J denotes the total number of decomposition levels of the image. The larger J is, the longer the algorithm takes to run; to preserve the multi-scale and multi-directional character of the decomposition while keeping run time low, J = 2 and the number of directions k ∈ [2, 16] are used;
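The NSCT has no standard Python implementation, so the sketch below substitutes an undecimated (à trous-style) Gaussian decomposition for the non-subsampled pyramid stage only: it splits an image into one low-frequency band and J shift-invariant high-frequency bands with perfect reconstruction, with J = 2 as in the patent. The Gaussian kernel and its dilation schedule are assumptions, and the directional filter bank stage is omitted:

```python
import numpy as np
from scipy import ndimage

def undecimated_decompose(img, levels=2):
    """Shift-invariant multi-scale split: one low band plus `levels` high bands."""
    lows, highs = img.astype(np.float64), []
    for d in range(levels):
        # Doubling sigma each level stands in for the a-trous kernel dilation;
        # no subsampling is performed, which gives translation invariance.
        smooth = ndimage.gaussian_filter(lows, sigma=2.0 * (2 ** d))
        highs.append(lows - smooth)  # high-frequency component at scale d
        lows = smooth                # remaining low-frequency content
    # Perfect reconstruction (the inverse transform of step (6)) is simply
    # lows + sum(highs).
    return lows, highs
```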
(5) fusing the high-frequency components and the low-frequency components with different fusion criteria to obtain the fused low-frequency component and the fused high-frequency component;
the low-frequency components reflect the overall scene content of the original images, while the high-frequency components carry texture and detail information; according to these characteristics, different criteria are adopted to fuse each, yielding the fused high-frequency and low-frequency components;
1) low frequency component fusion
For the corresponding low-frequency components obtained in step (4), the fused low-frequency component is obtained with an adaptive-threshold fusion criterion:
where I_TH is the brightness threshold and w_th is the weighting coefficient; the remaining symbols in formulas (5) and (6) denote the fused low-frequency component and the corresponding low-frequency components of the luminance image and the infrared image, respectively;
the low-frequency components of the luminance image and the infrared image are fused with the adaptive-threshold criterion: when the low-frequency component of the luminance image is greater than the threshold, it is selected as the fused low-frequency component; when it is less than or equal to the threshold, the fused low-frequency component is computed as the arithmetic mean of the luminance-image and infrared-image low-frequency components;
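A sketch of the adaptive-threshold low-frequency rule of formulas (5) and (6). The exact form of the threshold formula (5) does not survive in the text, so deriving I_TH from the brightest 0.13% of the visible low-frequency coefficients scaled by w_th = 0.75 (the values given in the detailed description) is an assumption:

```python
import numpy as np

def fuse_low(L_vis, L_ir, w_th=0.75, bright_fraction=0.0013):
    """Adaptive-threshold fusion of the low-frequency components."""
    # Threshold from the brightest ~0.13% of visible low-frequency
    # coefficients (attributed to background light), scaled by w_th.
    I_TH = w_th * np.percentile(L_vis, 100.0 * (1.0 - bright_fraction))
    return np.where(L_vis > I_TH,
                    L_vis,                  # bright pixels: keep visible light
                    0.5 * (L_vis + L_ir))   # otherwise: arithmetic mean
```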
2) high frequency component fusion
For the corresponding high-frequency components obtained in step (4), the fused high-frequency component is obtained with a fusion criterion that selects by the firing count of a pulse-coupled neural network (PCNN). The PCNN is a feedback network of interconnected neurons, each consisting of a receiving part, a modulation part and a pulse generator; the high-frequency coefficients form the feeding input that drives the PCNN, as shown in formula (7):
where the first symbol in formula (7) is the output of the feeding part and the second is the input signal; (i, j) denotes the pixel position, k the direction index of the d-th level high-frequency component, and n' the current iteration number;
the linking part of the pulse-coupled neural network is obtained from formula (8):
where the first symbol in formula (8) is the output of the linking part, m and n range over the linked neurons, V_L is a normalization coefficient, and W_ij,mn are the weights of the connected neurons;
the internal activity of the pulse-coupled neural network is computed from formulas (9) and (10);
where the quantity defined by formula (9) is the internal state, β, a_θ and V_θ are fixed factors, and the quantity defined by formula (10) is the dynamic threshold;
each iteration is as follows:
where X denotes the original luminance image or the infrared image, N is the total number of iterations, and the remaining symbol denotes the total firing count;
by comparing firing counts, the high-frequency component with the larger firing count is selected as the fused high-frequency component:
for the high-frequency components of the luminance image and the infrared image, fusion uses the larger-PCNN-firing-count criterion: the corresponding high-frequency components drive the pulse-coupled neural network through formula (7), the corresponding firing counts are computed from formulas (9) to (12), and, comparing firing counts with formula (13), the high-frequency component with the larger count is selected as the fused high-frequency component;
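A sketch of the PCNN firing-count rule of formulas (7) to (13), using the standard simplified PCNN (feeding, linking, modulation, dynamic threshold). The constants beta, a_theta, V_theta and V_L, the 3×3 linking kernel and the iteration count are illustrative assumptions, since the patent's values do not survive in the text:

```python
import numpy as np
from scipy import ndimage

def pcnn_fire_counts(S, iters=200, beta=0.2, a_theta=0.2, V_theta=20.0, V_L=1.0):
    """Total firing count per pixel of a simplified PCNN fed by |coefficients|."""
    S = np.abs(S) / (np.abs(S).max() + 1e-12)      # feeding input, cf. formula (7)
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])                # linking weights W_ij,mn (assumed)
    Y = np.zeros_like(S)                           # pulse output
    theta = np.ones_like(S)                        # dynamic threshold
    T = np.zeros_like(S)                           # accumulated firing count
    for _ in range(iters):
        Lk = V_L * ndimage.convolve(Y, W, mode="constant")  # linking, cf. formula (8)
        U = S * (1.0 + beta * Lk)                  # internal state (modulation)
        Y = (U > theta).astype(float)              # fire when state beats threshold
        theta = np.exp(-a_theta) * theta + V_theta * Y      # threshold decay/recharge
        T += Y
    return T

def fuse_high(H_vis, H_ir):
    """Keep, per pixel, the high-frequency coefficient with more firings."""
    T_vis, T_ir = pcnn_fire_counts(H_vis), pcnn_fire_counts(H_ir)
    return np.where(T_vis >= T_ir, H_vis, H_ir)
```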
(6) applying the inverse NSCT to the fused low-frequency and high-frequency components to obtain the fused grayscale image I_Gray-F;
the grayscale fusion image, obtained by the inverse multi-scale transform with the low-illumination grayscale visible light image and the infrared image as originals, keeps the infrared image information of low-illumination regions while preserving the texture of the low-illumination grayscale image; it improves the contrast and clarity over the original visible light grayscale image, so that people and other targets in the scene can be judged and identified more accurately by the human eye;
(7) performing a weighted summation of the fused grayscale image obtained in step (6) and the registered color visible light image to obtain the final color fusion image I_F:
I_F^C = w · I_visible^C + (1 - w) · I_Gray-F (14)
where C ranges over the R, G and B color channels, I_visible^C is the original visible light pixel value of channel C, I_F^C is the final fused image pixel value of channel C, and w is a weighting coefficient, generally w = 0.5.
Under the condition of low illumination, the average brightness of the color visible light image is low; to ensure that the color fusion image retains more of the visible light image's color information, the grayscale fusion image obtained in step (6) and the original color visible light image are summed with weights through formula (14) to obtain the final color fusion image.
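A sketch of the final recombination of formula (14), blending each RGB channel of the registered color visible image with the fused grayscale image at w = 0.5; the 0-255 value range is an assumption:

```python
import numpy as np

def color_fuse(vis_rgb, gray_fused, w=0.5):
    """Formula (14): per-channel blend of the color image and the fused gray image."""
    out = np.empty(vis_rgb.shape, dtype=np.float64)
    for c in range(3):  # C ranges over the R, G, B channels
        out[..., c] = w * vis_rgb[..., c] + (1.0 - w) * gray_fused
    return np.clip(out, 0, 255).astype(np.uint8)
```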
The invention has the following beneficial effects:
(1) The invention improves the contrast and clarity of the fused image by performing frequency-domain fusion of a low-illumination visible light image and an infrared image at 640 × 480 resolution.
(2) The image obtained by a visible light sensor generally has rich color and detail information, but under low-light conditions or in other severe weather (such as haze or dust) much scene information is lost and all-weather operation is impossible. The infrared sensor captures hidden hot targets through their heat radiation and is less affected by scene brightness and severe weather, but infrared images generally have low contrast and no color information, so a single sensor can hardly satisfy practical engineering applications. The invention adopts the NSCT: the original images are first decomposed at multiple scales and the corresponding high-frequency and low-frequency components are extracted; the low-frequency components are fused with the adaptive-threshold criterion; the high-frequency components are fused with the larger-PCNN-firing-count criterion; a new fused image is then obtained by inverse NSCT and weighted summation. This effectively removes blocking artifacts, highlights the detail of scene targets, retains more detail and color information, and improves the contrast and clarity of the fused image, benefiting observation and judgment by the human eye; the method can be applied under low illumination and in severe weather, and has wide social application value.
(3) The invention is suitable not only for fusing grayscale images but also color images, and handles original images with resolutions of 640 × 480 and above.
(4) The invention has wide application value in the aspects of video monitoring, intelligent transportation, medical diagnosis, machine vision, national defense safety and the like.
Drawings
FIG. 1 is a flow chart of an algorithm in an application example of the present invention;
FIG. 2 is a schematic view of a pan/tilt head of an image capture system in an application example of the present invention;
FIG. 3(a) is an original low-luminance color visible image of scene 1 in an exemplary embodiment of the present invention;
FIG. 3(b) is an original infrared image of scene 1 in an application example of the present invention;
fig. 4(a) is a luminance image of a scene 1 after feature registration in an application example of the present invention;
fig. 4(b) is an infrared image of a scene 1 after feature registration in an application example of the present invention;
FIG. 5 is a color fusion image after scene 1 fusion in an application example of the present invention;
FIG. 6(a) is an original low-luminance color visible image of scene 2 in an exemplary embodiment of the present invention;
FIG. 6(b) is an original low-illumination infrared image of scene 2 in an application example of the present invention;
fig. 7(a) is a luminance image of a scene 2 after feature registration in an application example of the present invention;
fig. 7(b) is an infrared image of a scene 2 after feature registration in an application example of the present invention;
fig. 8 is a color fusion image after scene 2 fusion in the application example of the present invention.
Detailed Description
The method comprises the following steps:
(1) acquiring an original low-illumination visible light image and an infrared image:
as shown in fig. 2, in the pan-tilt system for image acquisition according to the present invention, under a low illumination condition, an infrared camera and a visible light camera on the pan-tilt respectively acquire an original color visible light image and an infrared image, and the resolutions of the images are 640 × 480;
(2) registering the obtained original infrared image and the color visible light image by matching scale-invariant feature points:
because the camera positions differ, the lens focal lengths differ and other external conditions intervene, a given scene position appears at different positions in the color visible light image and the infrared image collected in step (1); image registration is therefore performed using the scale-invariant features of the original infrared image and the color visible light image, ensuring that any scene position also falls at the same position in both acquired images;
(3) extracting a luminance image from the registered color visible light image using the HSI transformation, and enhancing the contrast of the registered infrared image according to the characteristics of the infrared sensor:
after registration the color visible light image is a three-channel image while the infrared image is a single-channel image; the differing dimensions would cause fusion errors, so the luminance image I_visible of the color visible light image is extracted by the HSI color-space transformation, with the calculation shown in formula (1):
I_visible = (I_r + I_g + I_b) / 3 (1)
where I_r, I_g and I_b are the pixel values of the R, G and B channels of the low-illumination visible light image;
the infrared sensor images by the infrared radiation of objects in the scene, so objects such as people often have high brightness values in the infrared image, whereas to the human eye the same objects appear dark under low illumination at night. The pixel brightness values of the infrared image are therefore inverted, enhancing its contrast.
I_IR = L - I_ir (2)
where I_IR is the inverted infrared image, I_ir is the registered infrared image, and L is the number of gray levels of the infrared image; for 8-bit infrared pixels, L = 2^8 = 256;
(4) applying the NSCT to the luminance image I_visible and the infrared image I_IR to perform a multi-scale decomposition and obtain the corresponding low-frequency and high-frequency components:
the NSCT not only has the multiresolution, localization and multi-directionality of the Contourlet transform but is also translation-invariant, eliminating the Gibbs phenomenon. It comprises two parts: a non-subsampled pyramid filter bank, which realizes the multi-scale decomposition, and a non-subsampled directional filter bank, which realizes the decomposition along frequency-domain directions;
in the non-subsampled pyramid filter bank, the kth-level non-subsampled pyramid filter can be obtained by the following formula:
the non-subsampled pyramid filter also needs to satisfy the Bezout identity:
H0(z)G0(z)+H1(z)G1(z)=1 (4)
where H0(z) and G0(z) are the low-pass decomposition and synthesis filters of the non-subsampled pyramid, and H1(z) and G1(z) are its high-pass decomposition and synthesis filters;
the non-subsampled directional filter bank is built from fan-shaped (sector) filters. The subsampling step is removed to achieve directional translation invariance of the high-frequency components; because the directional responses at the lower and higher frequencies of the upper pyramid sub-bands easily cause spectral aliasing, the directional filters are upsampled, which effectively eliminates the aliasing.
Using the non-subsampled pyramid and the non-subsampled directional filter bank, the luminance image and the infrared image are transformed at multiple scales and the corresponding high-frequency and low-frequency components are extracted, where J ≥ d ≥ 1 and J denotes the total number of decomposition levels of the image. The larger J is, the longer the algorithm takes to run; to preserve the multi-scale and multi-directional character of the decomposition while keeping run time low, J = 2 and the number of directions k ∈ [2, 16] are used;
(5) fusing the high-frequency components and the low-frequency components with different fusion criteria to obtain the fused low-frequency component and the fused high-frequency component;
the low-frequency components reflect the overall scene content of the original images, while the high-frequency components carry texture and detail information; according to these characteristics, different criteria are adopted to fuse each, yielding the fused high-frequency and low-frequency components;
1) low frequency component fusion
For the corresponding low-frequency components obtained in step (4), the fused low-frequency component is obtained with an adaptive-threshold fusion criterion:
where I_TH is the brightness threshold and w_th is the weighting coefficient; the remaining symbols in formulas (5) and (6) denote the fused low-frequency component and the corresponding low-frequency components of the luminance image and the infrared image, respectively;
for the low-frequency components of the luminance image and the infrared image, fusion uses the adaptive-threshold criterion. The overall average brightness of the low-illumination visible light image is low, and most of its high-brightness pixels stem from background light such as car lights, street lamps or other lighting devices. Experiments indicate that roughly the brightest 0.13% of the low-frequency coefficients come from background light; the threshold is determined by formula (5), with w_th = 0.75 in this example. By formula (6), when the low-frequency component of the luminance image is greater than the threshold, it is selected as the fused low-frequency component; when it is less than or equal to the threshold, the fused low-frequency component is computed as the arithmetic mean of the luminance-image and infrared-image low-frequency components;
2) high frequency component fusion
For the corresponding high-frequency components obtained in step (4), the fused high-frequency component is obtained with a fusion criterion that selects by the firing count of a pulse-coupled neural network (PCNN). The PCNN is a feedback network of interconnected neurons, each consisting of a receiving part, a modulation part and a pulse generator; the high-frequency coefficients form the feeding input that drives the PCNN, as shown in formula (7):
where the first symbol in formula (7) is the output of the feeding part and the second is the input signal; (i, j) denotes the pixel position, k the direction index of the d-th level high-frequency component, and n' the current iteration number;
the linking part of the pulse-coupled neural network is obtained from formula (8):
where the first symbol in formula (8) is the output of the linking part, m and n range over the linked neurons, V_L is a normalization coefficient, and W_ij,mn are the weights of the connected neurons;
the internal activity of the pulse-coupled neural network is computed from formulas (9) and (10);
where the quantity defined by formula (9) is the internal state, β, a_θ and V_θ are fixed factors, and the quantity defined by formula (10) is the dynamic threshold;
each iteration is as follows:
where X denotes the original luminance image or the infrared image, N is the total number of iterations, and the remaining symbol denotes the total firing count;
by comparing firing counts, the high-frequency component with the larger firing count is selected as the fused high-frequency component:
for the high-frequency components of the luminance image and the infrared image, fusion uses the larger-PCNN-firing-count criterion: the corresponding high-frequency components drive the pulse-coupled neural network through formula (7), the corresponding firing counts are computed from formulas (9) to (12), and, comparing firing counts with formula (13), the high-frequency component with the larger count is selected as the fused high-frequency component;
(6) applying the inverse NSCT to the fused low-frequency and high-frequency components to obtain the fused grayscale image I_Gray-F;
the grayscale fusion image, obtained by the inverse multi-scale transform with the low-illumination grayscale visible light image and the infrared image as originals, keeps the infrared image information of low-illumination regions while preserving the texture of the low-illumination grayscale image; it improves the contrast and clarity over the original visible light grayscale image, so that people and other targets in the scene can be judged and identified more accurately by the human eye;
(7) performing a weighted summation of the fused grayscale image obtained in step (6) and the registered color visible light image to obtain the final color fusion image I_F:
I_F^C = w · I_visible^C + (1 - w) · I_Gray-F (14)
where C ranges over the R, G and B color channels, I_visible^C is the original visible light pixel value of channel C, I_F^C is the final fused image pixel value of channel C, and w is a weighting coefficient, generally w = 0.5.
Under the low illumination condition, the average brightness of the color visible light image is low. To ensure that the color fusion image retains more of the visible light image's color information, the grayscale fusion image obtained in step (6) and the original color visible light image are summed with weights through formula (14) to obtain the final color fusion image.
The invention is further described below with reference to specific application examples and the accompanying drawings.
(1) And acquiring an original low-illumination visible light image and an infrared image.
In a low-illumination environment, the original low-illumination visible light image and the original infrared image are acquired by the infrared camera and the visible light camera of the pan-tilt system shown in fig. 2, at a resolution of 640 × 480;
(2) registering the obtained original infrared image and the color visible light image by matching scale-invariant feature points;
(3) extracting a luminance image from the registered color visible light image using the HSI transformation, and enhancing the contrast of the registered infrared image according to the characteristics of the infrared sensor;
(4) applying the NSCT to the luminance image I_visible and the infrared image I_IR to perform a multi-scale decomposition and obtain the corresponding low-frequency and high-frequency components respectively;
(5) fusing the high-frequency components and the low-frequency components with different fusion criteria to obtain the fused low-frequency component and the fused high-frequency component;
the low-frequency components reflect the overall scene content of the original images, while the high-frequency components carry texture and detail information; according to these characteristics, different criteria are adopted to fuse each, yielding the fused high-frequency and low-frequency components;
for the corresponding low-frequency components of the luminance image and the infrared image, fusion uses the adaptive-threshold criterion, since the overall average brightness of the low-illumination visible light image is low; the fused component is obtained from formulas (5) and (6);
for the corresponding high-frequency components of the luminance image and the infrared image, fusion uses the larger-PCNN-firing-count criterion: the pulse-coupled neural network is driven through formula (7), the corresponding firing counts are computed from formulas (8) to (12), and, comparing firing counts with formula (13), the high-frequency component with the larger count is selected as the fused high-frequency component;
(6) applying the inverse NSCT to the fused low-frequency and high-frequency components to obtain the fused grayscale image I_Gray-F;
(7) performing a weighted summation of the fused grayscale image obtained in step (6) and the registered color visible light image to obtain the final color fusion image I_F.
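To show how the steps connect, the hypothetical driver below chains the earlier sketches along the flow of fig. 1; it assumes the previously defined sketch functions (register_to_visible, luminance_and_inverted_ir, undecimated_decompose, fuse_low, fuse_high, color_fuse) are in scope, and that the visible frame arrives in OpenCV's BGR channel order:

```python
import cv2

def fuse_pair(vis_bgr, ir_gray):
    """Steps (2)-(7) chained together for one image pair."""
    vis_gray = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2GRAY)
    vis_rgb = vis_bgr[..., ::-1]                               # BGR -> RGB
    ir_reg = register_to_visible(ir_gray, vis_gray)            # step (2)
    I_vis, I_IR = luminance_and_inverted_ir(vis_rgb, ir_reg)   # step (3)
    low_v, highs_v = undecimated_decompose(I_vis)              # step (4)
    low_i, highs_i = undecimated_decompose(I_IR)
    low_f = fuse_low(low_v, low_i)                             # step (5): low bands
    highs_f = [fuse_high(hv, hi)                               # step (5): high bands
               for hv, hi in zip(highs_v, highs_i)]
    gray_f = low_f + sum(highs_f)                              # step (6): inverse transform
    return color_fuse(vis_rgb, gray_f)                         # step (7)
```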
Under the low illumination condition, the average brightness of the color visible light image is low, the dynamic range is small and scene-target information is unclear; as shown in fig. 3(a) and fig. 6(a), it is difficult to make out the outlines and bodies of pedestrians and vehicles in the scene. The infrared image, collected from the infrared radiation of scene targets, clearly shows target information invisible to the visible light sensor but lacks color information, as shown in fig. 3(b) and fig. 6(b);
as shown in fig. 5, the fusion image of scene 1 effectively retains the color information of the street lamps, the car tail lights and the sky region of the visible light image, and keeps the texture detail of the car outlines (such as the wheels), the pedestrians and the street-lamp poles from the infrared image; as shown in fig. 8, the fusion image of scene 2 effectively retains the color information of the cars, the street-lamp light and the buildings, and keeps the texture detail of the pedestrians, trees and poles from the infrared image. The method thus effectively fuses the infrared image and the color visible light image: it retains the color information of the visible light image while recovering the texture detail of scene targets from the infrared image, and can be widely applied in video monitoring, intelligent transportation, national defense and security, and other fields.
The above description is only a preferred embodiment of the present invention; the protection scope of the present invention is not limited to the above embodiment, and all technical solutions falling within the principle of the present invention belong to its protection scope.
Claims (6)
1. A self-adaptive low-illumination visible light image and infrared image fusion method is characterized by comprising the following steps:
(1) acquiring an original low-illumination visible light image and an infrared image:
under the condition of low illumination, an infrared camera and a visible light camera on a pan-tilt head are used to collect an original color visible light image and an original infrared image respectively,
(2) registering the obtained original infrared image and the color visible light image by matching scale-invariant feature points:
according to the scale invariant features of the original infrared image and the color visible light image, image registration is carried out, and images at any same position in a scene are ensured to be at the same position in the acquired image;
(3) extracting a luminance image from the registered color visible light image using the HSI transformation, and enhancing the contrast of the registered infrared image according to the characteristics of the infrared sensor:
(4) applying the NSCT to the luminance image I_visible and the infrared image I_IR to perform a multi-scale decomposition and obtain the corresponding low-frequency and high-frequency components:
the NSCT comprises two parts: a non-subsampled pyramid filter bank and a non-subsampled directional filter bank; the non-subsampled pyramid filter bank realizes the multi-scale decomposition, and the non-subsampled directional filter bank realizes the decomposition along frequency-domain directions;
(5) fusing the high-frequency components and the low-frequency components with different fusion criteria to obtain the fused low-frequency component and the fused high-frequency component;
the low-frequency components reflect the overall scene content of the original images, while the high-frequency components carry texture and detail information; according to these characteristics, different criteria are adopted to fuse each, yielding the fused high-frequency and low-frequency components;
(6) applying the inverse NSCT to the fused low-frequency and high-frequency components to obtain the fused grayscale image I_Gray-F;
the grayscale fusion image, obtained by the inverse multi-scale transform with the low-illumination grayscale visible light image and the infrared image as originals, keeps the infrared image information of low-illumination regions while preserving the texture of the low-illumination grayscale image; it improves the contrast and clarity over the original visible light grayscale image, so that people and other targets in the scene can be judged and identified more accurately by the human eye;
(7) performing a weighted summation of the fused grayscale image obtained in step (6) and the registered color visible light image to obtain the final color fusion image I_F:
I_F^C = w · I_visible^C + (1 - w) · I_Gray-F (14)
where C ranges over the R, G and B color channels, I_visible^C is the original visible light pixel value of channel C, I_F^C is the final fused image pixel value of channel C, and w is a weighting coefficient;
under the condition of low illumination, the average brightness of the color visible light image is low; to ensure that the color fusion image retains more of the visible light image's color information, the grayscale fusion image obtained in step (6) and the original color visible light image are summed with weights through formula (14) to obtain the final color fusion image.
2. The adaptive low-illumination visible light image and infrared image fusion method according to claim 1, characterized in that: the resolutions of the original color visible light image and the infrared image collected in step (1) are both 640 × 480.
3. The adaptive low-illumination visible light image and infrared image fusion method according to claim 1, characterized in that: in step (3), after registration the color visible light image is a three-channel image while the infrared image is a single-channel image, so their dimensions differ; the luminance image I_visible of the color visible light image is therefore extracted by the HSI color-space transformation, with the calculation shown in formula (1):
I_visible = (I_r + I_g + I_b) / 3 (1)
where I_r, I_g and I_b are the pixel values of the R, G and B channels of the low-illumination visible light image;
the pixel brightness value of the infrared image is inverted, so that the contrast of the infrared image is enhanced;
I_IR = L - I_ir (2)
where I_IR is the inverted infrared image, I_ir is the registered infrared image, and L is the number of gray levels of the infrared image; for 8-bit infrared pixels, L = 2^8 = 256.
4. The adaptive low-illumination visible light image and infrared image fusion method according to claim 1, wherein in the step (4):
in the non-subsampled pyramid filter bank, the kth-level non-subsampled pyramid filter can be obtained by the following formula:
the non-subsampled pyramid filter also needs to satisfy the Bezout identity:
H0(z)G0(z)+H1(z)G1(z)=1 (4)
where H0(z) and G0(z) are the low-pass decomposition and synthesis filters of the non-subsampled pyramid, and H1(z) and G1(z) are its high-pass decomposition and synthesis filters;
the non-subsampled directional filter bank is built from fan-shaped (sector) filters, and upsampling the directional filters effectively eliminates spectral aliasing.
Using the non-subsampled pyramid and the non-subsampled directional filter bank, the luminance image and the infrared image are transformed at multiple scales and the corresponding high-frequency and low-frequency components are extracted, where J ≥ d ≥ 1 and J denotes the total number of decomposition levels of the image. The larger J is, the longer the algorithm takes to run; to preserve the multi-scale and multi-directional character of the decomposition while keeping run time low, J = 2 and the number of directions k ∈ [2, 16] are used.
5. The adaptive low-illumination visible light image and infrared image fusion method according to claim 1, wherein the low-frequency component fusion method in step (5) is as follows:
for the corresponding low-frequency components obtained in step (4), the fused low-frequency component is obtained with an adaptive-threshold fusion criterion:
where I_TH is the brightness threshold and w_th is the weighting coefficient; the remaining symbols in formulas (5) and (6) denote the fused low-frequency component and the corresponding low-frequency components of the luminance image and the infrared image, respectively;
the low-frequency components of the luminance image and the infrared image are fused with the adaptive-threshold criterion: when the low-frequency component of the luminance image is greater than the threshold, it is selected as the fused low-frequency component; when it is less than or equal to the threshold, the fused low-frequency component is computed as the arithmetic mean of the luminance-image and infrared-image low-frequency components.
6. The adaptive low-illumination visible light image and infrared image fusion method according to claim 1, wherein the high-frequency component fusion method in step (5) is as follows:
for the corresponding high-frequency components obtained in step (4), the fused high-frequency component is obtained with a fusion criterion that selects by the firing count of a pulse-coupled neural network (PCNN). The PCNN is a feedback network of interconnected neurons, each consisting of a receiving part, a modulation part and a pulse generator; the high-frequency coefficients form the feeding input that drives the PCNN, as shown in formula (7):
where the first symbol in formula (7) is the output of the feeding part and the second is the input signal; (i, j) denotes the pixel position, k the direction index of the d-th level high-frequency component, and n' the current iteration number;
the linking part of the pulse-coupled neural network is obtained from formula (8):
where the first symbol in formula (8) is the output of the linking part, m and n range over the linked neurons, V_L is a normalization coefficient, and W_ij,mn are the weights of the connected neurons;
the internal activity of the pulse-coupled neural network is computed from formulas (9) and (10);
where the quantity defined by formula (9) is the internal state, β, a_θ and V_θ are fixed factors, and the quantity defined by formula (10) is the dynamic threshold;
each iteration is as follows:
where X denotes the original luminance image or the infrared image, N is the total number of iterations, and the remaining symbol denotes the total firing count;
by comparing firing counts, the high-frequency component with the larger firing count is selected as the fused high-frequency component:
for the high-frequency components of the luminance image and the infrared image, fusion uses the larger-PCNN-firing-count criterion: the corresponding high-frequency components drive the pulse-coupled neural network through formula (7), the corresponding firing counts are computed from formulas (9) to (12), and, comparing firing counts with formula (13), the high-frequency component with the larger count is selected as the fused high-frequency component.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611142487.2A CN106600572A (en) | 2016-12-12 | 2016-12-12 | Adaptive low-illumination visible image and infrared image fusion method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611142487.2A CN106600572A (en) | 2016-12-12 | 2016-12-12 | Adaptive low-illumination visible image and infrared image fusion method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106600572A true CN106600572A (en) | 2017-04-26 |
Family
ID=58597724
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611142487.2A Pending CN106600572A (en) | 2016-12-12 | 2016-12-12 | Adaptive low-illumination visible image and infrared image fusion method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106600572A (en) |
Cited By (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107169944A (en) * | 2017-04-21 | 2017-09-15 | 北京理工大学 | A kind of infrared and visible light image fusion method based on multiscale contrast |
CN107203987A (en) * | 2017-06-07 | 2017-09-26 | 云南师范大学 | A kind of infrared image and low-light (level) image real time fusion system |
CN107451984A (en) * | 2017-07-27 | 2017-12-08 | 桂林电子科技大学 | A kind of infrared and visual image fusion algorithm based on mixing multiscale analysis |
CN107705268A (en) * | 2017-10-20 | 2018-02-16 | 天津工业大学 | One kind is based on improved Retinex and the enhancing of Welsh near-infrared images and colorization algorithm |
CN107798854A (en) * | 2017-11-12 | 2018-03-13 | 佛山鑫进科技有限公司 | A kind of ammeter long-distance monitoring method based on image recognition |
CN107909562A (en) * | 2017-12-05 | 2018-04-13 | 华中光电技术研究所(中国船舶重工集团公司第七七研究所) | A kind of Fast Image Fusion based on Pixel-level |
CN108053371A (en) * | 2017-11-30 | 2018-05-18 | 努比亚技术有限公司 | A kind of image processing method, terminal and computer readable storage medium |
CN108182698A (en) * | 2017-12-18 | 2018-06-19 | 凯迈(洛阳)测控有限公司 | A kind of fusion method of airborne photoelectric infrared image and visible images |
CN108427922A (en) * | 2018-03-06 | 2018-08-21 | 深圳市创艺工业技术有限公司 | A kind of efficient indoor environment regulating system |
CN108428224A (en) * | 2018-01-09 | 2018-08-21 | 中国农业大学 | Animal body surface temperature checking method and device based on convolutional Neural net |
CN108460786A (en) * | 2018-01-30 | 2018-08-28 | 中国航天电子技术研究院 | A kind of high speed tracking of unmanned plane spot |
CN108564543A (en) * | 2018-04-11 | 2018-09-21 | 长春理工大学 | A kind of underwater picture color compensation method based on electromagnetic theory |
CN108665487A (en) * | 2017-10-17 | 2018-10-16 | 国网河南省电力公司郑州供电公司 | Substation's manipulating object and object localization method based on the fusion of infrared and visible light |
CN108710910A (en) * | 2018-05-18 | 2018-10-26 | 中国科学院光电研究院 | A kind of target identification method and system based on convolutional neural networks |
CN108717689A (en) * | 2018-05-16 | 2018-10-30 | 北京理工大学 | Middle LONG WAVE INFRARED image interfusion method and device applied to naval vessel detection field under sky and ocean background |
CN108961180A (en) * | 2018-06-22 | 2018-12-07 | 理光软件研究所(北京)有限公司 | infrared image enhancing method and system |
CN109100364A (en) * | 2018-06-29 | 2018-12-28 | 杭州国翌科技有限公司 | A kind of tunnel defect monitoring system and monitoring method based on spectrum analysis |
CN109785277A (en) * | 2018-12-11 | 2019-05-21 | 南京第五十五所技术开发有限公司 | A kind of infrared and visible light image fusion method in real time |
CN109949353A (en) * | 2019-03-25 | 2019-06-28 | 北京理工大学 | A kind of low-light (level) image natural sense colorization method |
WO2019153920A1 (en) * | 2018-02-09 | 2019-08-15 | 华为技术有限公司 | Method for image processing and related device |
CN110210541A (en) * | 2019-05-23 | 2019-09-06 | 浙江大华技术股份有限公司 | Image interfusion method and equipment, storage device |
CN110223262A (en) * | 2018-12-28 | 2019-09-10 | 中国船舶重工集团公司第七一七研究所 | A kind of rapid image fusion method based on Pixel-level |
CN110246108A (en) * | 2018-11-21 | 2019-09-17 | 浙江大华技术股份有限公司 | A kind of image processing method, device and computer readable storage medium |
CN110363732A (en) * | 2018-04-11 | 2019-10-22 | 杭州海康威视数字技术股份有限公司 | A kind of image interfusion method and its device |
CN110363731A (en) * | 2018-04-10 | 2019-10-22 | 杭州海康威视数字技术股份有限公司 | A kind of image interfusion method, device and electronic equipment |
CN110490914A (en) * | 2019-07-29 | 2019-11-22 | 广东工业大学 | It is a kind of based on brightness adaptively and conspicuousness detect image interfusion method |
WO2020051897A1 (en) * | 2018-09-14 | 2020-03-19 | 浙江宇视科技有限公司 | Image fusion method and system, electronic device, and computer readable storage medium |
CN111008946A (en) * | 2019-11-07 | 2020-04-14 | 武汉多谱多勒科技有限公司 | Infrared and visible light image intelligent fusion device and method used in fire fighting site |
CN111160171A (en) * | 2019-12-19 | 2020-05-15 | 哈尔滨工程大学 | Radiation source signal identification method combining two-domain multi-features |
WO2020112442A1 (en) * | 2018-11-27 | 2020-06-04 | Google Llc | Methods and systems for colorizing infrared images |
CN111325701A (en) * | 2018-12-14 | 2020-06-23 | 杭州海康威视数字技术股份有限公司 | Image processing method, device and storage medium |
CN111385466A (en) * | 2018-12-30 | 2020-07-07 | 浙江宇视科技有限公司 | Automatic focusing method, device, equipment and storage medium |
CN111476732A (en) * | 2020-04-03 | 2020-07-31 | 江苏宇特光电科技股份有限公司 | Image fusion and denoising method and system |
CN111612736A (en) * | 2020-04-08 | 2020-09-01 | 广东电网有限责任公司 | Power equipment fault detection method, computer and computer program |
WO2020237931A1 (en) * | 2019-05-24 | 2020-12-03 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for image processing |
CN112132753A (en) * | 2020-11-06 | 2020-12-25 | 湖南大学 | Infrared image super-resolution method and system for multi-scale structure guide image |
CN112258442A (en) * | 2020-11-12 | 2021-01-22 | Oppo广东移动通信有限公司 | Image fusion method and device, computer equipment and storage medium |
CN112307901A (en) * | 2020-09-28 | 2021-02-02 | 国网浙江省电力有限公司电力科学研究院 | Landslide detection-oriented SAR and optical image fusion method and system |
US10942274B2 (en) | 2018-04-11 | 2021-03-09 | Microsoft Technology Licensing, Llc | Time of flight and picture camera |
CN112487947A (en) * | 2020-11-26 | 2021-03-12 | 西北工业大学 | Low-illumination image target detection method based on image fusion and target detection network |
CN112767289A (en) * | 2019-10-21 | 2021-05-07 | 浙江宇视科技有限公司 | Image fusion method, device, medium and electronic equipment |
CN112767291A (en) * | 2021-01-04 | 2021-05-07 | 浙江大华技术股份有限公司 | Visible light image and infrared image fusion method and device and readable storage medium |
CN113223033A (en) * | 2021-05-10 | 2021-08-06 | 广州朗国电子科技有限公司 | Poultry body temperature detection method, device and medium based on image fusion |
CN113538303A (en) * | 2020-04-20 | 2021-10-22 | 杭州海康威视数字技术股份有限公司 | Image fusion method |
CN113822833A (en) * | 2021-09-26 | 2021-12-21 | 沈阳航空航天大学 | Infrared and visible light image frequency domain fusion method based on convolutional neural network and regional energy |
CN114581315A (en) * | 2022-01-05 | 2022-06-03 | 中国民用航空飞行学院 | Low-visibility approach flight multi-mode monitoring image enhancement method |
CN114708181A (en) * | 2022-04-18 | 2022-07-05 | 烟台艾睿光电科技有限公司 | Image fusion method, device, equipment and storage medium |
CN114862730A (en) * | 2021-02-04 | 2022-08-05 | 四川大学 | Infrared and visible light image fusion method based on multi-scale analysis and VGG-19 |
CN114881905A (en) * | 2022-06-21 | 2022-08-09 | 西北工业大学 | Processing method for fusing infrared image and visible light image based on wavelet transformation |
CN115086573A (en) * | 2022-05-19 | 2022-09-20 | 北京航天控制仪器研究所 | External synchronous exposure-based heterogeneous video fusion method and system |
CN116681636A (en) * | 2023-07-26 | 2023-09-01 | 南京大学 | Light infrared and visible light image fusion method based on convolutional neural network |
US11798147B2 (en) | 2018-06-30 | 2023-10-24 | Huawei Technologies Co., Ltd. | Image processing method and device |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101093580A (en) * | 2007-08-29 | 2007-12-26 | 华中科技大学 | Image fusion method based on non-subsampled contourlet transform |
CN101504766A (en) * | 2009-03-25 | 2009-08-12 | 湖南大学 | Image fusion method based on hybrid multi-resolution decomposition |
CN101546428A (en) * | 2009-05-07 | 2009-09-30 | 西北工业大学 | Fusion of sequential infrared and visible light images based on region segmentation |
CN102646272A (en) * | 2012-02-23 | 2012-08-22 | 南京信息工程大学 | Wavelet-domain meteorological satellite cloud image fusion method based on local variance and weighted combination |
CN102722877A (en) * | 2012-06-07 | 2012-10-10 | 内蒙古科技大学 | Multi-focus image fusion method based on dual-channel PCNN (pulse coupled neural network) |
CN103177433A (en) * | 2013-04-09 | 2013-06-26 | 南京理工大学 | Infrared and low-light image fusion method |
CN104200452A (en) * | 2014-09-05 | 2014-12-10 | 西安电子科技大学 | Method and device for fusing infrared and visible light images based on spectral wavelet transformation |
CN104282007A (en) * | 2014-10-22 | 2015-01-14 | 长春理工大学 | Adaptive medical image fusion method based on non-subsampled contourlet transform |
CN105069768A (en) * | 2015-08-05 | 2015-11-18 | 武汉高德红外股份有限公司 | Visible-light image and infrared image fusion processing system and fusion method |
CN105809640A (en) * | 2016-03-09 | 2016-07-27 | 长春理工大学 | Multi-sensor fusion low-illumination video image enhancement method |
Non-Patent Citations (1)
Title |
---|
SHUO LIU et al.: "Research on fusion technology based on low-light visible image and infrared image", Optical Engineering *
Cited By (82)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107169944A (en) * | 2017-04-21 | 2017-09-15 | 北京理工大学 | Infrared and visible light image fusion method based on multi-scale contrast |
CN107203987A (en) * | 2017-06-07 | 2017-09-26 | 云南师范大学 | Real-time fusion system for infrared and low-illumination images |
CN107451984A (en) * | 2017-07-27 | 2017-12-08 | 桂林电子科技大学 | Infrared and visible light image fusion algorithm based on mixed multi-scale analysis |
CN107451984B (en) * | 2017-07-27 | 2021-06-22 | 桂林电子科技大学 | Infrared and visible light image fusion algorithm based on mixed multi-scale analysis |
CN108665487A (en) * | 2017-10-17 | 2018-10-16 | 国网河南省电力公司郑州供电公司 | Transformer substation operation object and target positioning method based on infrared and visible light fusion |
CN108665487B (en) * | 2017-10-17 | 2022-12-13 | 国网河南省电力公司郑州供电公司 | Transformer substation operation object and target positioning method based on infrared and visible light fusion |
CN107705268A (en) * | 2017-10-20 | 2018-02-16 | 天津工业大学 | Near-infrared image enhancement and colorization algorithm based on improved Retinex and Welsh methods |
CN107798854A (en) * | 2017-11-12 | 2018-03-13 | 佛山鑫进科技有限公司 | Remote electricity meter monitoring method based on image recognition |
CN108053371A (en) * | 2017-11-30 | 2018-05-18 | 努比亚技术有限公司 | Image processing method, terminal and computer-readable storage medium |
CN107909562A (en) * | 2017-12-05 | 2018-04-13 | 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) | Fast image fusion algorithm based on pixel level |
CN107909562B (en) * | 2017-12-05 | 2021-06-08 | 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) | Fast image fusion algorithm based on pixel level |
CN108182698A (en) * | 2017-12-18 | 2018-06-19 | 凯迈(洛阳)测控有限公司 | Fusion method for airborne electro-optical infrared and visible light images |
CN108428224B (en) * | 2018-01-09 | 2020-05-22 | 中国农业大学 | Animal body surface temperature detection method and device based on convolutional neural network |
CN108428224A (en) * | 2018-01-09 | 2018-08-21 | 中国农业大学 | Animal body surface temperature detection method and device based on convolutional neural network |
CN108460786A (en) * | 2018-01-30 | 2018-08-28 | 中国航天电子技术研究院 | High-speed tracking method for UAV light spots |
CN110136183A (en) * | 2018-02-09 | 2019-08-16 | 华为技术有限公司 | Image processing method and related device |
JP2021513278A (en) * | 2018-02-09 | 2021-05-20 | 華為技術有限公司 (Huawei Technologies Co., Ltd.) | Image processing method and related device |
US11250550B2 (en) | 2018-02-09 | 2022-02-15 | Huawei Technologies Co., Ltd. | Image processing method and related device |
CN110136183B (en) * | 2018-02-09 | 2021-05-18 | 华为技术有限公司 | Image processing method and device and camera device |
WO2019153920A1 (en) * | 2018-02-09 | 2019-08-15 | 华为技术有限公司 | Method for image processing and related device |
CN108427922A (en) * | 2018-03-06 | 2018-08-21 | 深圳市创艺工业技术有限公司 | Efficient indoor environment regulation system |
CN110363731A (en) * | 2018-04-10 | 2019-10-22 | 杭州海康威视数字技术股份有限公司 | Image fusion method and device and electronic equipment |
CN110363731B (en) * | 2018-04-10 | 2021-09-03 | 杭州海康微影传感科技有限公司 | Image fusion method and device and electronic equipment |
CN108564543A (en) * | 2018-04-11 | 2018-09-21 | 长春理工大学 | Underwater image color compensation method based on electromagnetic theory |
CN110363732A (en) * | 2018-04-11 | 2019-10-22 | 杭州海康威视数字技术股份有限公司 | Image fusion method and device |
US10942274B2 (en) | 2018-04-11 | 2021-03-09 | Microsoft Technology Licensing, Llc | Time of flight and picture camera |
CN108717689A (en) * | 2018-05-16 | 2018-10-30 | 北京理工大学 | Mid- and long-wave infrared image fusion method and device for ship detection under sea-sky background |
CN108710910B (en) * | 2018-05-18 | 2020-12-04 | 中国科学院光电研究院 | Target identification method and system based on convolutional neural network |
CN108710910A (en) * | 2018-05-18 | 2018-10-26 | 中国科学院光电研究院 | Target identification method and system based on convolutional neural network |
CN108961180A (en) * | 2018-06-22 | 2018-12-07 | 理光软件研究所(北京)有限公司 | Infrared image enhancement method and system |
CN108961180B (en) * | 2018-06-22 | 2020-09-25 | 理光软件研究所(北京)有限公司 | Infrared image enhancement method and system |
CN109100364A (en) * | 2018-06-29 | 2018-12-28 | 杭州国翌科技有限公司 | Tunnel defect monitoring system and method based on spectrum analysis |
US11798147B2 (en) | 2018-06-30 | 2023-10-24 | Huawei Technologies Co., Ltd. | Image processing method and device |
WO2020051897A1 (en) * | 2018-09-14 | 2020-03-19 | 浙江宇视科技有限公司 | Image fusion method and system, electronic device, and computer readable storage medium |
CN110246108A (en) * | 2018-11-21 | 2019-09-17 | 浙江大华技术股份有限公司 | Image processing method, device and computer readable storage medium |
CN110246108B (en) * | 2018-11-21 | 2023-06-20 | 浙江大华技术股份有限公司 | Image processing method, device and computer readable storage medium |
US11875520B2 (en) | 2018-11-21 | 2024-01-16 | Zhejiang Dahua Technology Co., Ltd. | Method and system for generating a fusion image |
US11483451B2 (en) | 2018-11-27 | 2022-10-25 | Google Llc | Methods and systems for colorizing infrared images |
WO2020112442A1 (en) * | 2018-11-27 | 2020-06-04 | Google Llc | Methods and systems for colorizing infrared images |
CN109785277B (en) * | 2018-12-11 | 2022-10-04 | 南京第五十五所技术开发有限公司 | Real-time infrared and visible light image fusion method |
CN109785277A (en) * | 2018-12-11 | 2019-05-21 | 南京第五十五所技术开发有限公司 | Real-time infrared and visible light image fusion method |
CN111325701B (en) * | 2018-12-14 | 2023-05-09 | 杭州海康微影传感科技有限公司 | Image processing method, device and storage medium |
CN111325701A (en) * | 2018-12-14 | 2020-06-23 | 杭州海康威视数字技术股份有限公司 | Image processing method, device and storage medium |
CN110223262A (en) * | 2018-12-28 | 2019-09-10 | 中国船舶重工集团公司第七一七研究所 | Rapid image fusion method based on pixel level |
CN111385466A (en) * | 2018-12-30 | 2020-07-07 | 浙江宇视科技有限公司 | Automatic focusing method, device, equipment and storage medium |
CN111385466B (en) * | 2018-12-30 | 2021-08-24 | 浙江宇视科技有限公司 | Automatic focusing method, device, equipment and storage medium |
CN109949353A (en) * | 2019-03-25 | 2019-06-28 | 北京理工大学 | Natural-appearance colorization method for low-illumination images |
CN110210541B (en) * | 2019-05-23 | 2021-09-03 | 浙江大华技术股份有限公司 | Image fusion method and device, and storage device |
CN110210541A (en) * | 2019-05-23 | 2019-09-06 | 浙江大华技术股份有限公司 | Image fusion method and device, and storage device |
WO2020237931A1 (en) * | 2019-05-24 | 2020-12-03 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for image processing |
EP3948766A4 (en) * | 2019-05-24 | 2022-07-06 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for image processing |
US12056848B2 (en) | 2019-05-24 | 2024-08-06 | Zhejiang Dahua Technology Co., Ltd. | Systems and methods for image processing |
CN110490914B (en) * | 2019-07-29 | 2022-11-15 | 广东工业大学 | Image fusion method based on brightness self-adaption and significance detection |
CN110490914A (en) * | 2019-07-29 | 2019-11-22 | 广东工业大学 | Image fusion method based on brightness self-adaption and significance detection |
CN112767289A (en) * | 2019-10-21 | 2021-05-07 | 浙江宇视科技有限公司 | Image fusion method, device, medium and electronic equipment |
CN112767289B (en) * | 2019-10-21 | 2024-05-07 | 浙江宇视科技有限公司 | Image fusion method, device, medium and electronic equipment |
CN111008946A (en) * | 2019-11-07 | 2020-04-14 | 武汉多谱多勒科技有限公司 | Infrared and visible light image intelligent fusion device and method used in fire fighting site |
CN111160171A (en) * | 2019-12-19 | 2020-05-15 | 哈尔滨工程大学 | Radiation source signal identification method combining two-domain multi-features |
CN111160171B (en) * | 2019-12-19 | 2022-04-12 | 哈尔滨工程大学 | Radiation source signal identification method combining two-domain multi-features |
CN111476732A (en) * | 2020-04-03 | 2020-07-31 | 江苏宇特光电科技股份有限公司 | Image fusion and denoising method and system |
CN111612736A (en) * | 2020-04-08 | 2020-09-01 | 广东电网有限责任公司 | Power equipment fault detection method, computer and computer program |
CN113538303A (en) * | 2020-04-20 | 2021-10-22 | 杭州海康威视数字技术股份有限公司 | Image fusion method |
CN113538303B (en) * | 2020-04-20 | 2023-05-26 | 杭州海康威视数字技术股份有限公司 | Image fusion method |
CN112307901A (en) * | 2020-09-28 | 2021-02-02 | 国网浙江省电力有限公司电力科学研究院 | Landslide detection-oriented SAR and optical image fusion method and system |
CN112307901B (en) * | 2020-09-28 | 2024-05-10 | 国网浙江省电力有限公司电力科学研究院 | SAR and optical image fusion method and system for landslide detection |
CN112132753A (en) * | 2020-11-06 | 2020-12-25 | 湖南大学 | Infrared image super-resolution method and system for multi-scale structure guide image |
CN112132753B (en) * | 2020-11-06 | 2022-04-05 | 湖南大学 | Infrared image super-resolution method and system for multi-scale structure guide image |
CN112258442A (en) * | 2020-11-12 | 2021-01-22 | Oppo广东移动通信有限公司 | Image fusion method and device, computer equipment and storage medium |
CN112487947A (en) * | 2020-11-26 | 2021-03-12 | 西北工业大学 | Low-illumination image target detection method based on image fusion and target detection network |
CN112767291B (en) * | 2021-01-04 | 2024-05-28 | 浙江华感科技有限公司 | Visible light image and infrared image fusion method, device and readable storage medium |
CN112767291A (en) * | 2021-01-04 | 2021-05-07 | 浙江大华技术股份有限公司 | Visible light image and infrared image fusion method and device and readable storage medium |
CN114862730B (en) * | 2021-02-04 | 2023-05-23 | 四川大学 | Infrared and visible light image fusion method based on multi-scale analysis and VGG-19 |
CN114862730A (en) * | 2021-02-04 | 2022-08-05 | 四川大学 | Infrared and visible light image fusion method based on multi-scale analysis and VGG-19 |
CN113223033A (en) * | 2021-05-10 | 2021-08-06 | 广州朗国电子科技有限公司 | Poultry body temperature detection method, device and medium based on image fusion |
CN113822833A (en) * | 2021-09-26 | 2021-12-21 | 沈阳航空航天大学 | Infrared and visible light image frequency domain fusion method based on convolutional neural network and regional energy |
CN113822833B (en) * | 2021-09-26 | 2024-01-16 | 沈阳航空航天大学 | Infrared and visible light image frequency domain fusion method based on convolutional neural network and regional energy |
CN114581315A (en) * | 2022-01-05 | 2022-06-03 | 中国民用航空飞行学院 | Low-visibility approach flight multi-mode monitoring image enhancement method |
CN114708181A (en) * | 2022-04-18 | 2022-07-05 | 烟台艾睿光电科技有限公司 | Image fusion method, device, equipment and storage medium |
CN115086573A (en) * | 2022-05-19 | 2022-09-20 | 北京航天控制仪器研究所 | External synchronous exposure-based heterogeneous video fusion method and system |
CN114881905A (en) * | 2022-06-21 | 2022-08-09 | 西北工业大学 | Processing method for fusing infrared image and visible light image based on wavelet transformation |
CN116681636A (en) * | 2023-07-26 | 2023-09-01 | 南京大学 | Lightweight infrared and visible light image fusion method based on convolutional neural network |
CN116681636B (en) * | 2023-07-26 | 2023-12-12 | 南京大学 | Lightweight infrared and visible light image fusion method based on convolutional neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106600572A (en) | Adaptive low-illumination visible image and infrared image fusion method | |
Lee et al. | Brightness-based convolutional neural network for thermal image enhancement | |
CN109559310B (en) | Power transmission and transformation inspection image quality evaluation method and system based on significance detection | |
Zhang et al. | A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application | |
CN112184604B (en) | Color image enhancement method based on image fusion | |
CN110363140A (en) | Real-time human action recognition method based on infrared images |
Aguilar et al. | Real-time fusion of low-light CCD and uncooled IR imagery for color night vision | |
Zin et al. | Fusion of infrared and visible images for robust person detection | |
CN105809640B (en) | Low-illumination video image enhancement method based on multi-sensor fusion |
CN108629757A (en) | Image fusion method based on complex shearlet transform and deep convolutional neural networks |
CN106846289A (en) | Infrared intensity and polarization image fusion method based on saliency transfer and detail classification |
CN106815826A (en) | Night vision image color fusion method based on scene recognition |
Chumuang et al. | CCTV based surveillance system for railway station security | |
CN102236785B (en) | Method for pedestrian matching between viewpoints of non-overlapped cameras | |
CN105913404A (en) | Low-illumination imaging method based on frame accumulation | |
Broggi et al. | Pedestrian detection on a moving vehicle: an investigation about near infra-red images | |
CN106886747A (en) | Ship detection under complex background based on extended wavelet transform |
CN111311503A (en) | Night low-brightness image enhancement system | |
Junwu et al. | An infrared and visible image fusion algorithm based on LSWT-NSST | |
CN115761618A (en) | Key site security monitoring image identification method | |
CN108090397A (en) | Pedestrian detection system based on infrared images |
Das et al. | Color night vision for navigation and surveillance | |
CN110827375A (en) | Infrared image true color coloring method and system based on low-light-level image | |
CN104751138B (en) | Vehicle-mounted infrared image colorization driver assistance system |
Zhang et al. | Recognition of greenhouse cucumber fruit using computer vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170426 |