CN116071266A - Retinex-based low-light image enhancement method, storage medium and terminal - Google Patents

Retinex-based low-light image enhancement method, storage medium and terminal

Info

Publication number
CN116071266A
CN116071266A (application CN202310159846.9A)
Authority
CN
China
Prior art keywords
brightness
retinex
low
image
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310159846.9A
Other languages
Chinese (zh)
Inventor
高帅博
邬文慧
邱国平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Peng Cheng Laboratory
Original Assignee
Shenzhen University
Peng Cheng Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University, Peng Cheng Laboratory filed Critical Shenzhen University
Priority to CN202310159846.9A priority Critical patent/CN116071266A/en
Publication of CN116071266A publication Critical patent/CN116071266A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a Retinex-based low-light image enhancement method, a storage medium and a terminal. The method comprises: enhancing the brightness channel of a first low-light image with a brightness processing module to obtain a brightness enhancement channel; recombining the brightness enhancement channel with the other color channels into a first enhanced image that meets the brightness requirement but has color degradation; processing the color degradation of the first enhanced image with a color recovery module to obtain a second enhanced image; performing supervised learning on the second enhanced image with the normal image to construct a low-light model applicable to low-light images of all scenes; and inputting a second low-light image into the pre-trained low-light model to obtain an enhanced target image. The invention applies the Retinex theory to enhance brightness while maintaining image texture, solves texture degradation and color degradation in two sequential steps, handles the various types of degradation more effectively, and improves the quality of low-illumination images in terms of brightness, contrast, color and texture detail.

Description

Retinex-based low-light image enhancement method, storage medium and terminal
Technical Field
The invention belongs to the technical field of digital image processing and computer vision, and particularly relates to a Retinex-based low-light image enhancement method, a storage medium and a terminal.
Background
Low-light images often have complex degradations such as low visibility, low contrast, hue shift, texture loss and noise, and current model-based and deep-learning-based low-light enhancement methods often consider only a few of these degradations or lump the complex degradations together. Such methods may achieve good results in one of brightness, color, texture detail or noise handling, but it is difficult for them to achieve satisfactory results in all these respects at the same time. RRM can effectively remove noise but performs poorly at improving image brightness; Retinex-Net (Retinex is a computational theory of color-constant perception; the word is a blend of retina and cortex) can effectively improve image brightness, but serious noise and hue shift appear; Zero-DCE (Zero-Reference Deep Curve Estimation) better resolves the color shift, but the contrast is low and textures are unclear; RUAS, while improving the low contrast of images, often causes local overexposure; AGLLNet (attention-guided low-light network) performs impressively in terms of color and brightness, but local texture recovery is poor and artifacts often appear; URetinex-Net performs well in texture and brightness, but its saturation is low and colors are not vivid enough.
Therefore, there is a need for a Retinex-based low-light image enhancement method that can improve the quality of low-light images in terms of brightness, contrast, color, and texture details.
Disclosure of Invention
The invention aims to provide a Retinex-based low-light image enhancement method, a storage medium and a terminal to solve the technical problem that current model-based and deep-learning-based low-light image enhancement methods often consider only a few degradations, or lump the complex and varied degradations together to solve them at once; such methods may achieve good results in one of brightness, color, texture detail or noise handling, but find it difficult to achieve satisfactory results in all these aspects at the same time.
To this end, the invention provides a Retinex-based low-light image enhancement method, comprising:
inputting a first low-light image, and enhancing a brightness channel of the first low-light image by a pre-constructed brightness processing module to obtain a brightness enhancement channel;
recombining a brightness enhancement channel with other color channels to obtain a first enhanced image which meets the brightness requirement and has color degradation, wherein the other color channels are other color channels except the brightness channel;
the pre-constructed color recovery module processes the color degradation of the first enhanced image to obtain a second enhanced image;
performing supervised learning on the second enhanced image by using the normal image, and updating parameters of the brightness processing module and the color recovery module to construct a low-light model applicable to the full-scene low-light image;
and inputting the second low-light image into a pre-trained low-light model to obtain an enhanced target image.
In a more preferred embodiment, the step of inputting a first low-light image and enhancing the brightness channel of the first low-light image with the pre-constructed brightness processing module to obtain a brightness enhancement channel includes:
constructing an initialization decomposition module, and preliminarily decomposing the brightness channel of the image into R_0 and L_0 through the initialization decomposition module, wherein R_0 is a rough estimate of the reflection component of the object and L_0 is a rough estimate of the ambient light illumination component;
constructing an unfolding solution module, and solving a Retinex decomposition problem on a brightness channel through the unfolding solution module to obtain accurate R and L, wherein R is a reflection component of an object, and L is an ambient light irradiation component;
constructing a brightness fusion module, and fusing the accurate reflection component of the object obtained by the unfolding solution module with an additionally supplied new illumination map through the brightness fusion module to obtain the brightness enhancement channel; wherein the additional new illumination map is an image with uniform and sufficient brightness.
In a more preferred embodiment, the solving, by the expansion solving module, the Retinex decomposition problem on the luminance channel includes:
Retinex decomposition on the luminance channel:

    min_{R,L} ||V − R·L||_F^2 + α_1·Ω_1(R) + α_2·Ω_2(L)

wherein ||V − R·L||_F is the Frobenius norm, ||V − R·L||_F^2 is the reconstruction term, V is the luminance channel, Ω_1(R) and Ω_2(L) are regularization terms on R and L respectively, and α_1 and α_2 are hyper-parameters.
In a more preferred embodiment, the solving, by the expansion solving module, the Retinex decomposition problem on the luminance channel further includes:
estimating R through supervised learning, and calculating an ambient light irradiation component L according to the estimated R:
L=V/R
and decomposing Retinex on the brightness channel according to the obtained ambient light irradiation component L:
    min_{R,L} ||V − R·L||_F^2 + α·Ω(R)

where α is a hyper-parameter and Ω(R) is the regularization term on R.
In a more preferred embodiment, the solving, by the expansion solving module, the Retinex decomposition problem on the luminance channel further includes:
introducing an auxiliary variable Z, and decomposing Retinex on the brightness channel:

    min_{R,L,Z} ||V − R·L||_F^2 + α·Ω(Z) + γ·||Z − R||_F^2

splitting the Retinex decomposition problem into 3 single-variable sub-problems:

    Z sub-problem:  min_Z  α·Ω(Z) + γ·||Z − R||_F^2
    R sub-problem:  min_R  ||V − R·L||_F^2 + γ·||Z − R||_F^2
    L sub-problem:  min_L  ||V − R·L||_F^2

where Ω(Z) is the regularization term on Z, γ is a hyper-parameter, and ||Z − R||_F is the Frobenius norm; the first sub-problem involves only the auxiliary variable Z, the second only the variable R, and the third only the variable L, and the R and L sub-problems are least-squares problems that can be solved by setting their first derivatives equal to 0.
In a more preferred embodiment, the solving, by the expansion solving module, the Retinex decomposition problem on the luminance channel, then includes:
solving the Retinex decomposition problem on the brightness channel by iteratively updating the variables, wherein each iteration performs the updates:

    Z^{k+1} = F_{θ_z}(R^k)
    R^{k+1} = (L^k·V + γ·Z^{k+1}) / (L^k·L^k + γ)
    L^{k+1} = V / R^{k+1}

wherein F_{θ_z} is a convolutional neural network, θ_z are the parameters of the convolutional neural network, k is the iteration-update index, R^k is the value of R at the k-th iteration update, R^{k+1} is the value of R at the (k+1)-th iteration update, Z^{k+1} is the value of Z at the (k+1)-th iteration update, L^k is the value of L at the k-th iteration update, and L^{k+1} is the value of L at the (k+1)-th iteration update; all operations are element-wise.
In a more preferred embodiment, the color recovery module includes a backbone network, a downsampled first branch network, and a downsampled second branch network; the backbone network comprises 8 layers of convolution layers and an activation function, wherein the activation function of the first 7 layers of convolution layers is a LeakyReLU, and the activation function of the 8 th layer is a ReLU.
In a more preferred embodiment, the initialization decomposition module is an initialization convolutional neural network, the initialization convolutional neural network includes 4 layers of convolutional layers and an activation function, wherein the activation function of the first 3 layers of convolutional layers is a LeakyReLU, and the activation function of the last layer of convolutional layers is a ReLU.
In another aspect, the present invention provides a computer readable storage medium storing one or more programs executable by one or more processors to implement the steps in the Retinex-based low-light image enhancement method described above.
In another aspect, the present invention further provides a terminal, including: a processor and a memory having stored thereon a computer readable program executable by the processor; the processor, when executing the computer readable program, implements the steps in the Retinex-based low-light image enhancement method as described above.
Compared with the prior art, the invention has the following characteristics and beneficial effects. For the complex degradations in low-light images, the invention adopts a divide-and-conquer strategy, decomposing the complex problem into several relatively simple problems that are solved one by one. Considering that the degradations are not completely independent of each other, the invention roughly divides them into two major categories, texture degradation and color degradation, which are modeled and solved in sequence. To deal with texture degradation, the invention first focuses on brightness adjustment and texture preservation, and proposes a Retinex-based brightness processing module LPM that enhances only the brightness channel of the first low-light image to obtain a brightness enhancement channel. After the texture degradation has been handled, the brightness enhancement channel and the other color channels are recombined to obtain a first enhanced image that meets the brightness requirement but still has color degradation. The color recovery module then processes the color degradation of the first enhanced image to obtain a second enhanced image. By applying the Retinex theory, the invention enhances brightness while maintaining image texture and solves texture degradation and color degradation in two sequential steps; compared with other methods, the constructed model handles the various types of degradation more effectively, thereby improving the quality of low-illumination images in terms of brightness, contrast, color and texture detail.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without creative effort for a person of ordinary skill in the art.
FIG. 1 is a flow chart of a Retinex-based low-light image enhancement method;
FIG. 2 is a schematic diagram of the overall framework of a low-light model;
FIG. 3 is a schematic diagram of a network architecture of SE;
FIGS. 4 and 5 are visual comparisons of the method of the present patent with other methods;
fig. 6 is a schematic structural diagram of a terminal device provided by the present invention.
The drawings are marked: 10-processor, 11-display screen, 12-memory, 13-communication interface, 14-bus.
Detailed Description
The invention provides a Retinex-based low-light image enhancement method, a storage medium and a terminal, and in order to make the purposes, technical schemes and effects of the application clearer and more definite, the application is further described in detail below by referring to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the present digital age, people's lives are inseparable from a large amount of multimedia information, including images, videos and speech, among which the visual information represented by images and videos is the most direct and important part of the information people acquire. Digital images are widely used in security monitoring, traffic management, satellite remote sensing and military reconnaissance, and, with artificial intelligence as the representative, in fields such as face recognition, semantic segmentation and automatic driving; most fields of human activity cannot do without images. To fully extract the information in an image, people's requirements on image quality keep rising; however, when an image is acquired, interference from the external environment inevitably introduces noise and degrades image quality. In a low-illumination environment, due to the lack of sufficient light, the acquired image has large dark areas and serious noise, its visibility and contrast are severely reduced, and the performance of subsequent high-level vision tasks is seriously affected, for example the accuracy of face recognition and license plate recognition decreases.
In order to obtain high-quality images in low-light environments, the most straightforward way is to adjust the parameter settings of the physical imaging process to improve imaging quality. Extending the exposure time of the imaging device gathers more light, but is very susceptible to blur caused by shaking. Another way is to increase the film (sensor) sensitivity; while this can significantly increase image brightness, it also introduces a lot of noise, reducing image detail and overall quality. Using a flash or external lights is also a common technique, but uneven light distribution can cause problems such as unnatural colors. In summary, it is difficult to obtain an ideal image simply by adjusting the physical imaging process, so it is necessary to build an intelligent low-light enhancement algorithm to obtain high-quality pictures.
According to different algorithm design concepts, the existing low-light image enhancement algorithms can be divided into 3 types: a method based on distribution mapping, a method based on model optimization and a method based on deep learning.
The distribution-mapping-based methods start from the pixel-value distribution of the low-illumination image and improve it using gamma correction, curve transformations such as power and logarithmic functions, histogram equalization, adaptive histogram equalization and similar techniques, thereby increasing image brightness. However, such techniques, a common way of processing images before the development of modern artificial intelligence, consider only the pixel-value distribution and ignore the imaging process and the spatial arrangement of pixel values; as a result they cannot effectively distinguish the semantic information of the image, and suffer from color distortion, obvious noise and unclear textures.
Based on the model optimization method, the Retinex theory is often adopted to carry out mathematical modeling on the image, and a model capable of effectively enhancing the image is obtained by designing a target optimization function and iteratively updating model parameters, so that the enhancement of the image is realized. The Retinex theory mimics the human visual system, assuming that picture I consists of the product of the reflection map R and the illumination map L, namely: i=r·l, where the reflection map R represents scene information of an image and the illumination map L represents external illumination information. The difference in imaging between low-light and normal-light images is the external light distribution and intensity, so in Retinex theory, R is the same for both, while L is different. Therefore, the model optimization method based on the Retinex theory obtains R, L through decomposing the image, then enhances L, and finally multiplies the enhanced L with R to realize low-illumination enhancement. However, the method needs to manually add prior regularization terms for the Retinex decomposition and the L enhancement of the image, namely, constraint and optimization directions are set for variables according to experience, so that the generalization performance of the model is poor, a better enhancement result can be obtained only in a specific scene, and the method is difficult to apply in a complex scene. Furthermore, a model needs to be trained for each image, which makes the time cost for processing the images in batches large, and is difficult to apply to real-time image enhancement (such as night view shooting optimization of a camera).
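As a toy illustration of this Retinex model (not the patented method itself), the following Python/NumPy sketch estimates L with a Gaussian blur, takes R = I/L, and brightens the image by compressing L with a gamma curve; the blur-based illumination estimate and the gamma value are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def toy_retinex_enhance(img, sigma=15.0, gamma=0.45, eps=1e-4):
        """Toy Retinex enhancement: I = R * L, brighten by compressing L.

        img: single-channel float array in [0, 1]. The Gaussian-blur
        illumination estimate and the gamma value are illustrative choices,
        not the patent's learned decomposition.
        """
        L = np.clip(gaussian_filter(img, sigma), eps, 1.0)  # rough illumination map
        R = img / L                                          # reflectance, R = I / L
        L_new = L ** gamma                                   # lift the dark regions
        return np.clip(R * L_new, 0.0, 1.0)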
In the big data age, in order to overcome the defects of the traditional method, the concept of a data driving model is applied, the relationship between low-illumination input and enhancement output is established by designing a network structure by utilizing the strong nonlinear mapping capability of deep learning, and the method becomes a mainstream low-illumination image enhancement mode.
Aiming at the technical problem that current model-based and deep-learning-based low-illumination enhancement methods consider only a few degradations or lump the complex and varied degradations together, so that a good effect may be achieved in one of brightness, color, texture detail and noise handling while a satisfactory effect in all these aspects at the same time remains difficult, the invention provides a Retinex-based low-illumination image enhancement method, a novel supervised low-illumination image enhancement method based on deep learning.
Example 1
As shown in fig. 1 and fig. 2, taking HSV color space as an example, the low-light image enhancement method based on Retinex specifically includes:
s10, inputting a first low-light image, and enhancing a V channel (brightness channel) of the first low-light image by a pre-constructed brightness processing module LPM (Luminance Processing Module) to obtain a brightness enhancing channel Venhanced. Specifically, the first low-light image is firstly converted from an RGB color space to an HSV color space, and a brightness processing module LPM is used for enhancing a brightness channel to obtain a brightness enhancement channel venenhanced. Wherein the first low-light image is a low-light image in the training sample.
The brightness processing module is built from three parts: an initialization decomposition module Init (Initialization Block), an unfolding solution module (Unfolding Block), and a brightness fusion module BIF (Illumination Fusion Block). The initialization decomposition module preliminarily decomposes the brightness channel of the image into R_0 and L_0, where R_0 is the initial reflection component of the object and L_0 is the initial ambient light illumination component. The unfolding solution module solves the Retinex decomposition problem on the brightness channel to obtain accurate R and L. The brightness fusion module takes R together with an additionally supplied new illumination map L̂, which has uniform and sufficient brightness, and outputs the enhanced brightness channel Venhanced.
In this embodiment, the initialization decomposition module is an initialization convolutional neural network Init composed of 4 convolution layers and activation functions, where the activation function of the first 3 convolution layers is LeakyReLU and that of the last convolution layer is ReLU. It should be noted that the initialization decomposition module may also be implemented with other conventional initialization methods.
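A sketch of such a 4-layer initialization network in PyTorch; the patent only fixes the layer count and activations, so the channel widths and kernel sizes below are assumptions:

    import torch
    import torch.nn as nn

    class InitDecomposition(nn.Module):
        """Preliminary decomposition of the luminance channel V into (R0, L0).

        4 convolution layers, LeakyReLU after the first 3 and ReLU after the
        last; the 2 output channels are split into R0 and L0. Widths are
        illustrative.
        """
        def __init__(self, width=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, width, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(width, 2, 3, padding=1), nn.ReLU(inplace=True),
            )

        def forward(self, v):                      # v: (N, 1, H, W) luminance channel
            r0, l0 = self.net(v).chunk(2, dim=1)
            return r0, l0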
the Retinex decomposition problem on the luminance channel can be generally described as:
Figure BDA0004093749130000083
wherein II V-R.L II F The number of the F-norms is the number of the F-norms,
Figure BDA0004093749130000084
reconstruction term, V is brightness channel, R is reflection component of object, L is ambient light irradiation component, Ω 1 (R) and Ω 2 (L) is a canonical term for R and L, respectively, α 1 And alpha 2 Is a super parameter.
Since the normal image Inormal contains sufficient scene information and the reflection map R represents the scene information in the image, the present invention estimates R by supervised learning when Retinex on the luminance channel is decomposed, and then obtains L using formula (1), formula (1) is:
L=V/R (1)
Therefore, the invention discards the regularization term on L, which reduces the computational cost and shortens the running time by nearly half. The Retinex decomposition problem on the luminance channel then becomes:

    min_{R,L} ||V − R·L||_F^2 + α·Ω(R)

where α is a hyper-parameter and Ω(R) is the regularization term on R.
To make this optimization problem easier to solve, the invention introduces an auxiliary variable Z into the Retinex decomposition on the luminance channel, and the problem can be converted into:

    min_{R,L,Z} ||V − R·L||_F^2 + α·Ω(Z) + γ·||Z − R||_F^2        (4)

where γ in formula (4) is a hyper-parameter, Ω(Z) is the regularization term on Z, and ||Z − R||_F is the Frobenius norm. Formula (4) can then be split into 3 single-variable sub-problems:

    Z sub-problem:  min_Z  α·Ω(Z) + γ·||Z − R||_F^2
    R sub-problem:  min_R  ||V − R·L||_F^2 + γ·||Z − R||_F^2
    L sub-problem:  min_L  ||V − R·L||_F^2
The first sub-problem involves only the auxiliary variable Z, the second only the variable R, and the third only the variable L. Whereas Ω(Z) is hand-designed in conventional methods, the invention uses a convolutional neural network F_{θ_z} to learn Ω(Z) from data, so that it can adapt to different scenes. The R and L sub-problems are least-squares problems that can be solved by setting their first derivatives equal to 0.
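For concreteness, setting the first derivative of each least-squares sub-problem to zero gives element-wise closed forms (a routine derivation added here for readability):

    R sub-problem:  ∂/∂R [ ||V − R·L||_F^2 + γ·||Z − R||_F^2 ] = 0
                    ⇒ −2·L·(V − R·L) − 2·γ·(Z − R) = 0
                    ⇒ R = (L·V + γ·Z) / (L·L + γ)

    L sub-problem:  ∂/∂L ||V − R·L||_F^2 = 0  ⇒  L = (R·V) / (R·R) = V / R

with all products and divisions taken element-wise.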
The invention solves the Retinex decomposition problem on the luminance channel by iteratively updating the variables; each iteration performs the updates:

    Z^{k+1} = F_{θ_z}(R^k)
    R^{k+1} = (L^k·V + γ·Z^{k+1}) / (L^k·L^k + γ)
    L^{k+1} = V / R^{k+1}

where F_{θ_z} is a convolutional neural network, θ_z are its parameters, k is the iteration-update index, R^k and R^{k+1} are the values of R at the k-th and (k+1)-th iteration updates, Z^{k+1} is the value of Z at the (k+1)-th update, and L^k and L^{k+1} are the values of L at the k-th and (k+1)-th updates; all operations are element-wise. In a specific embodiment, k is set to 3, which balances running time against model quality: the more iterative updates, the more accurate the estimated R and L, but the greater the time overhead. One skilled in the art can set k to another value depending on the experimental situation.
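Under the closed forms above, the unfolding loop can be sketched in PyTorch as follows; prox_net (the learned network standing in for the regularizer on Z), the initial r0 and l0 from the initialization module, and the value of gamma are assumptions of this sketch:

    import torch

    def unfold_retinex(v, r0, l0, prox_net, gamma=0.1, num_iters=3, eps=1e-4):
        """Unfolded solution of the luminance-channel Retinex decomposition.

        v, r0, l0: tensors of shape (N, 1, H, W); prox_net is the learned
        convolutional network; gamma and eps are illustrative values.
        """
        r, l = r0, l0
        for _ in range(num_iters):
            z = prox_net(r)                                # Z^{k+1} = F_theta(R^k)
            r = (l * v + gamma * z) / (l * l + gamma)      # closed-form R update
            l = v / r.clamp(min=eps)                       # closed-form L update
        return r, l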
After passing through the initialization decomposition module and the unfolding solution module, the luminance channel of the first low-light image is decomposed into R and L. Since L depends on the illumination at the moment the image was taken, the L of the first low-light image may be independent of, and uncorrelated with, the L of a normally illuminated image. Therefore, unlike other existing methods, the invention does not map L from low illumination to normal illumination; instead it supplies an additional new illumination map L̂ with uniform and sufficient brightness and sends it, together with R, to the luminance fusion module BIF, which outputs the enhanced luminance channel Venhanced. The process can be described as:

    Venhanced = BIF_{θ_IF}(R_K, L̂)

where R_K is the R of the last iteration in the unfolding solution module and θ_IF are the parameters of the neural network BIF. To ensure that the supplementary new illumination map L̂ is reasonable, the invention computes the mean value ω of the illumination map Lnormal of the normal-illumination image and then expands ω to the same size as R_K to obtain the additional new illumination map L̂. ω is calculated as:

    ω = (1 / (H·W)) · Σ_{i,j} Lnormal(i, j)

where H and W are the height and width of Lnormal.
the luminance fusion module BIF includes 5 layers of convolution+activation functions, where the first 4 layers of activation functions are LeakyReLU and the last layer of activation functions are ReLU.
S20, recombining the brightness enhancement channel Venhanced with the other color channels to obtain a first enhanced image I_obe that meets the brightness requirement but still has color degradation, where the other color channels are the color channels other than the luminance channel. Specifically, taking the HSV color space as an example, the brightness enhancement channel Venhanced is recombined with the H and S color channels to obtain the first enhanced image I_obe.
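Continuing the earlier OpenCV sketch, the recombination step could look as follows (variable names carried over from that hypothetical sketch):

    import cv2
    import numpy as np

    def recombine_hsv(h, s, v_enhanced):
        """Merge the enhanced V channel back with the original H and S channels."""
        v8 = np.clip(v_enhanced * 255.0, 0, 255).astype(np.float32)
        hsv = cv2.merge([h, s, v8]).astype(np.uint8)
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)  # first enhanced image I_obe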
S30, processing the color degradation of the first enhanced image with the pre-constructed color recovery module CRM to obtain a second enhanced image Ienhanced.
The CRM is built around a color recovery network CRNet; the color recovery module includes a backbone network, a downsampled first branch network and a downsampled second branch network. The backbone network is composed of 8 convolution layers and activation functions, where the activation function of the first 7 convolution layers is LeakyReLU and that of the 8th layer is ReLU; the feature maps after the 1st and 7th convolutions of the backbone are stacked (concatenated) and take part in the next convolution, the feature maps after the 2nd and 6th convolutions are concatenated and take part in the next convolution, and the feature maps after the 3rd and 5th convolutions are concatenated and take part in the next convolution; the feature maps after the 1st and 3rd convolutions of the downsampled first branch network are concatenated and take part in the next convolution.
After the backbone network has performed 2 convolutions, the resulting feature map is downsampled and enters the downsampled first branch network; after 4 convolution layers and activation functions, this branch upsamples its feature map, which is concatenated with the feature map from the backbone's 6th convolution and takes part in the next convolution.
The downsampled first branch network is downsampled again after one convolution and enters the downsampled second branch network; after 2 convolution layers and activation functions, this branch upsamples its feature map, which is concatenated with the feature map from the 3rd convolution of the downsampled first branch network and takes part in the next convolution.
In addition, the concatenated feature maps in the downsampled first and second branch networks are fused with SE modules, whose structure is shown in FIG. 3. The two downsampled branch networks effectively enlarge the receptive field of the feature maps, which helps recover image details, and the SE modules effectively fuse the image features, which helps recover the color of the whole image. Details of the CRNet network are shown in Table 1.
Table 1. Details of the CRNet network.
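The SE-based feature fusion mentioned above can be sketched as a standard squeeze-and-excitation block in PyTorch; treating the patent's SE module as the usual channel-attention design is an assumption based on the name, and the reduction ratio is illustrative:

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Channel attention: squeeze (global average pool), then excite (two FC layers)."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )

        def forward(self, x):                       # x: (N, C, H, W)
            w = x.mean(dim=(2, 3))                  # squeeze: per-channel statistics
            w = self.fc(w).view(x.size(0), -1, 1, 1)
            return x * w                            # reweight (excite) the channels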
And S40, performing supervised learning on the second enhanced image by using the normal image, and updating parameters of the brightness processing module and the color recovery module to construct a low-light model applicable to the full-scene low-light image.
The loss used to train the initialization decomposition module includes a second constraint term, constructed from the identity matrix E_n and the matrix transpose operation, whose purpose is to prevent L_0 from being all zeros.
The unfolding solution module and the brightness fusion module are trained together. Their loss function involves φ(R_K), the features of R_K extracted by a VGG19 feature extractor; φ(Venhanced), the features of Venhanced extracted by the same VGG19 feature extractor; SSIM(Venhanced, Vnormal), the structural similarity between Venhanced and Vnormal; the hyper-parameter γ; Z_K, the Z of the last iteration in the unfolding solution module; and Vnormal, the luminance channel of the normal image.
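The exact weighting of these terms is given as an image in the original; purely as an illustration, a loss of this flavor could be assembled in PyTorch roughly as below, where the ssim callable is an assumed helper (e.g. from a third-party SSIM package) and the VGG19 layer cut-off is arbitrary:

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19

    _vgg = vgg19(weights="IMAGENET1K_V1").features[:16].eval()  # VGG19 feature extractor
    for p in _vgg.parameters():
        p.requires_grad_(False)

    def vgg19_features(x):
        """Extract mid-level VGG19 features; single-channel maps are repeated to 3 channels."""
        return _vgg(x.repeat(1, 3, 1, 1) if x.size(1) == 1 else x)

    def luminance_loss(v_enhanced, v_normal, r_k, ssim):
        """Illustrative combination: pixel L1 + perceptual distance + (1 - SSIM)."""
        pixel = F.l1_loss(v_enhanced, v_normal)
        perceptual = F.l1_loss(vgg19_features(v_enhanced), vgg19_features(r_k))
        structural = 1.0 - ssim(v_enhanced, v_normal)
        return pixel + perceptual + structural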
The loss function used to train the color recovery module, equation (15), involves Inormal, the normal image; a gradient operator applied in the horizontal and vertical directions; and SSIM(Ienhanced, Inormal), the structural similarity between Ienhanced and Inormal.
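The horizontal/vertical gradient operator can be realized with simple finite differences; the sketch below is illustrative and not the patent's exact loss:

    import torch
    import torch.nn.functional as F

    def image_gradients(x):
        """Finite-difference gradients in the horizontal and vertical directions."""
        dx = x[..., :, 1:] - x[..., :, :-1]
        dy = x[..., 1:, :] - x[..., :-1, :]
        return dx, dy

    def gradient_loss(i_enhanced, i_normal):
        """Penalize mismatch between the gradients of the enhanced and normal images."""
        ex, ey = image_gradients(i_enhanced)
        nx, ny = image_gradients(i_normal)
        return F.l1_loss(ex, nx) + F.l1_loss(ey, ny)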
S50, inputting the second low-light image into a pre-trained low-light model to obtain an enhanced target image. The second low-light image is an image in the input sample.
By applying the Retinex theory, the brightness is enhanced while the image texture is maintained, and the texture degradation and the color degradation are solved in two steps in turn.
Fig. 4 and 5 are comparisons of the enhancement method of the present invention with other methods. Other methods may perform well in some aspects of brightness, texture, noise suppression and color, but the method of the invention not only can effectively improve brightness, contrast and suppress noise on the premise of keeping the texture clear, but also is more natural in color recovery.
It should be noted that, the brightness adjustment module in the present invention can be used for brightness channels in other color spaces, such as Lab, YUV, YCbCr, besides brightness channels in HSV color space.
Example two
Based on the Retinex-based low-light image enhancement method described above, the present invention provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps in the Retinex-based low-light image enhancement method described above.
Example III
Based on the Retinex-based low-light image enhancement method, the invention also provides a terminal, as shown in fig. 6, which at least comprises a processor 10; a display screen 11; and a memory 12, which may also include a communication interface 13 and a bus 14. Wherein the processor 10, the display 11, the memory 12 and the communication interface 13 may communicate with each other via a bus 14. The display screen 11 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 13 may transmit information. The processor 10 may invoke logic instructions in the memory 12 to perform the methods of the embodiments described above.
Further, the logic instructions in the memory 12 described above may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand alone product.
The memory 12, as a computer readable storage medium, may be configured to store a software program, a computer executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 10 performs functional applications and data processing, i.e. implements the methods of the embodiments described above, by running software programs, instructions or modules stored in the memory 12.
Memory 12 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of the terminal device, etc. In addition, the memory 12 may include high-speed random access memory, and may also include nonvolatile memory. For example, a plurality of media capable of storing program codes such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or a transitory storage medium may be used.
In addition, the specific processes that the storage medium and the plurality of instruction processors in the terminal device load and execute are described in detail in the above method, and are not stated here.
In summary, the invention aims at complex degradation in low-light images, adopts an approach of solving one by one, and decomposes a complex problem into a plurality of relatively simple problems for solving. Considering that the degenerations are not completely independent of each other, the invention roughly divides the degenerations into two major categories of texture degeneration and color degeneration, and the two major categories are respectively modeled and solved in sequence. In order to deal with texture degradation, the invention firstly focuses on brightness adjustment and texture preservation, and provides a Retinex-based brightness processing module LPM which only enhances the brightness channel of a first low-illumination image to obtain a brightness enhancement channel. After texture degradation is processed, the brightness enhancement channel and other color channels are recombined, and a first enhanced image which meets the brightness requirement and has color degradation is obtained. The color recovery module is then utilized to process the color degradation of the first enhanced image to obtain a second enhanced image. The invention enhances brightness while maintaining image texture by applying Retinex theory, and solves texture degradation and color degradation in two steps in turn, and the constructed model can more effectively process degradation of various types compared with other methods, thereby improving the quality of low-illumination images in various aspects of brightness, contrast, color and texture detail.
According to the invention, the constraint term Ω(R) on R in the classical Retinex decomposition problem is changed from a traditional hand-crafted design to an adaptive fit by a convolutional neural network, and L can be obtained with L = V/R once R has been estimated; the invention therefore discards the regularization term on L, which reduces the computation and shortens the running time by nearly half.
The regularization terms on R and L in traditional model-based optimization methods are all designed by hand and struggle to adapt to complex scenes. The invention combines deep learning with model-based optimization, solves the Retinex decomposition problem with an unfolding solution scheme (iteratively updating the variables), and learns an adaptive regularization term from data with a convolutional neural network, which can greatly improve the generalization performance of the model.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A Retinex-based low-light image enhancement method, comprising:
inputting a first low-light image, and enhancing a brightness channel of the first low-light image by a pre-constructed brightness processing module to obtain a brightness enhancement channel;
recombining a brightness enhancement channel with other color channels to obtain a first enhanced image which meets the brightness requirement and has color degradation, wherein the other color channels are other color channels except the brightness channel;
the pre-constructed color recovery module processes the color degradation of the first enhanced image to obtain a second enhanced image;
performing supervised learning on the second enhanced image by using the normal image, and updating parameters of the brightness processing module and the color recovery module to construct a low-light model applicable to the full-scene low-light image;
and inputting the second low-light image into a pre-trained low-light model to obtain an enhanced target image.
2. The Retinex-based low-light image enhancement method according to claim 1, wherein the inputting the first low-light image, the pre-constructed luminance processing module enhancing a luminance channel of the first low-light image to obtain a luminance enhancement channel, includes:
constructing an initialization decomposition module, and preliminarily decomposing a brightness channel of an image into R_0 and L_0 through the initialization decomposition module, wherein R_0 is a rough estimate of the reflection component of the object and L_0 is a rough estimate of the ambient light illumination component;
constructing an unfolding solution module, and solving a Retinex decomposition problem on a brightness channel through the unfolding solution module to obtain accurate R and L, wherein R is a reflection component of an object, and L is an ambient light irradiation component;
constructing a brightness fusion module, and fusing the accurate reflection component of the object obtained by the unfolding solution module with an additionally supplied new illumination map through the brightness fusion module to obtain the brightness enhancement channel; wherein the additional new illumination map is an image with uniform and sufficient brightness.
3. The Retinex-based low-light image enhancement method of claim 2, wherein the solving, by the expansion solving module, a Retinex decomposition problem on a luminance channel, comprises:
Retinex decomposition on the luminance channel:

    min_{R,L} ||V − R·L||_F^2 + α_1·Ω_1(R) + α_2·Ω_2(L)

wherein ||V − R·L||_F is the Frobenius norm, ||V − R·L||_F^2 is the reconstruction term, V is the luminance channel, Ω_1(R) and Ω_2(L) are regularization terms on R and L respectively, and α_1 and α_2 are hyper-parameters.
4. A Retinex-based low-light image enhancement method according to claim 3, wherein said solving, by said expansion solving module, a Retinex decomposition problem on a luminance channel, further comprises:
estimating R through supervised learning, and calculating an ambient light irradiation component L according to the estimated R:
L=V/R
and decomposing Retinex on the brightness channel according to the obtained ambient light irradiation component L:
    min_{R,L} ||V − R·L||_F^2 + α·Ω(R)

where α is a hyper-parameter and Ω(R) is the regularization term on R.
5. The Retinex-based low-light image enhancement method of claim 4, wherein the solving, by the expansion solving module, a Retinex decomposition problem on a luminance channel, further comprises:
introducing an auxiliary variable Z, and decomposing Retinex on the brightness channel:

    min_{R,L,Z} ||V − R·L||_F^2 + α·Ω(Z) + γ·||Z − R||_F^2

splitting the Retinex decomposition problem into 3 single-variable sub-problems:

    Z sub-problem:  min_Z  α·Ω(Z) + γ·||Z − R||_F^2
    R sub-problem:  min_R  ||V − R·L||_F^2 + γ·||Z − R||_F^2
    L sub-problem:  min_L  ||V − R·L||_F^2

where Ω(Z) is the regularization term on Z, γ is a hyper-parameter, and ||Z − R||_F is the Frobenius norm; the first sub-problem involves only the auxiliary variable Z, the second only the variable R, and the third only the variable L, and the R and L sub-problems are least-squares problems that can be solved by setting their first derivatives equal to 0.
6. The Retinex-based low-light image enhancement method of claim 2, wherein the solving, by the expansion solving module, the Retinex decomposition problem on the luminance channel, then comprises:
solving the Retinex decomposition problem on the brightness channel by iteratively updating the variables, wherein each iteration performs the updates:

    Z^{k+1} = F_{θ_z}(R^k)
    R^{k+1} = (L^k·V + γ·Z^{k+1}) / (L^k·L^k + γ)
    L^{k+1} = V / R^{k+1}

wherein F_{θ_z} is a convolutional neural network, θ_z are the parameters of the convolutional neural network, k is the iteration-update index, R^k is the value of R at the k-th iteration update, R^{k+1} is the value of R at the (k+1)-th iteration update, Z^{k+1} is the value of Z at the (k+1)-th iteration update, L^k is the value of L at the k-th iteration update, and L^{k+1} is the value of L at the (k+1)-th iteration update, all operations being element-wise.
7. The Retinex-based low-light image enhancement method of claim 2, wherein the color recovery module comprises a backbone network, a downsampled first branch network, and a downsampled second branch network; the backbone network comprises 8 layers of convolution layers and an activation function, wherein the activation function of the first 7 layers of convolution layers is a LeakyReLU, and the activation function of the 8 th layer is a ReLU.
8. The Retinex-based low-light image enhancement method of claim 2, wherein: the initialization decomposition module is an initialization convolutional neural network, the initialization convolutional neural network comprises 4 layers of convolutional layers and an activation function, wherein the activation function of the first 3 layers of convolutional layers is LeakyReLU, and the activation function of the last layer of convolutional layers is ReLU.
9. A computer readable storage medium storing one or more programs executable by one or more processors to implement the steps in the Retinex-based low-light image enhancement method of any of claims 1-8.
10. A terminal, characterized by comprising: a processor and a memory having stored thereon a computer readable program executable by the processor; the processor, when executing the computer readable program, implements the steps of the Retinex-based low-light image enhancement method according to any of claims 1 to 8.
CN202310159846.9A 2023-02-17 2023-02-17 Retinex-based low-light image enhancement method, storage medium and terminal Pending CN116071266A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310159846.9A CN116071266A (en) 2023-02-17 2023-02-17 Retinex-based low-light image enhancement method, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310159846.9A CN116071266A (en) 2023-02-17 2023-02-17 Retinex-based low-light image enhancement method, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN116071266A true CN116071266A (en) 2023-05-05

Family

ID=86178459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310159846.9A Pending CN116071266A (en) 2023-02-17 2023-02-17 Retinex-based low-light image enhancement method, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN116071266A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118279570A (en) * 2024-06-03 2024-07-02 中国铁建电气化局集团第二工程有限公司 Overhead contact system bolt inspection method based on improved Fast rcnn algorithm


Similar Documents

Publication Publication Date Title
Li et al. Low-light image and video enhancement using deep learning: A survey
Liu et al. Survey of natural image enhancement techniques: Classification, evaluation, challenges, and perspectives
Wang et al. Joint iterative color correction and dehazing for underwater image enhancement
CN113129236B (en) Single low-light image enhancement method and system based on Retinex and convolutional neural network
KR102095443B1 (en) Method and Apparatus for Enhancing Image using Structural Tensor Based on Deep Learning
Zhang et al. IID-MEF: A multi-exposure fusion network based on intrinsic image decomposition
Zheng et al. Unsupervised underexposed image enhancement via self-illuminated and perceptual guidance
Rasheed et al. LSR: Lightening super-resolution deep network for low-light image enhancement
WO2024217182A1 (en) Image enhancement method and apparatus, electronic device, and storage medium
Tang et al. A local flatness based variational approach to retinex
Lei et al. A novel intelligent underwater image enhancement method via color correction and contrast stretching✰
Li et al. Flexible piecewise curves estimation for photo enhancement
Liu et al. Color enhancement using global parameters and local features learning
CN115035011B (en) Low-illumination image enhancement method of self-adaption RetinexNet under fusion strategy
CN116109509A (en) Real-time low-illumination image enhancement method and system based on pixel-by-pixel gamma correction
CN116071266A (en) Retinex-based low-light image enhancement method, storage medium and terminal
CN116188339A (en) Retinex and image fusion-based scotopic vision image enhancement method
CN115760630A (en) Low-illumination image enhancement method
CN115797205A (en) Unsupervised single image enhancement method and system based on Retinex fractional order variation network
CN117391987A (en) Dim light image processing method based on multi-stage joint enhancement mechanism
CN111161189A (en) Single image re-enhancement method based on detail compensation network
CN114283101B (en) Multi-exposure image fusion unsupervised learning method and device and electronic equipment
CN113409225B (en) Retinex-based unmanned aerial vehicle shooting image enhancement algorithm
US20230186612A1 (en) Image processing methods and systems for generating a training dataset for low-light image enhancement using machine learning models
CN115689871A (en) Unsupervised portrait image color migration method based on generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination