CN103313068A - White balance corrected image processing method and device based on gray edge constraint gray world - Google Patents

Publication number: CN103313068A (granted as CN103313068B)
Application number: CN201310205857.2
Authority: CN (China)
Legal status: Granted; Active
Inventors: 张茂军, 熊志辉, 赖世铭, 谭鑫, 陈捷, 王博
Original assignee (applicant): Shanxi Green Optoelectronic Industry Science And Technology Research Institute (Co., Ltd.)
Current assignee: Hunan Vision Splend Photoelectric Technology Co., Ltd.
Classifications: Color Television Image Signal Generators; Processing Of Color Television Signals; Color Image Communication Systems

Abstract

The invention relates to the technical field of digital image processing, and in particular to a white balance correction image processing method and device based on a gray-edge-constrained gray world. It addresses the problem that the existing gray world algorithm is accurate in most scenes but unstable in some (such as scenes dominated by a large monochromatic object), while the gray edge algorithm is robust but less accurate. The method first solves for a gray edge constraint with the gray edge algorithm, limiting the solution space to the range permitted by that constraint and thereby guaranteeing basic robustness; it then solves for an exact solution within this limited space with the gray world algorithm. The disclosed algorithm avoids the disadvantages of the two basic algorithms (gray edge and gray world), makes full use of their advantages, finds an exact solution quickly, and is strongly robust. The method and device are reasonable in design.

Description

White balance correction image processing method and device based on gray edge constraint gray world
Technical Field
The invention relates to the technical field of digital image processing, in particular to an image processing method and device based on white balance correction of a gray edge constraint gray world.
Background
Color is the basis of an image and is also the visual information of the image. On one hand, color information of an image is collected for human viewing, and on the other hand, the color information of the image is widely used in computer vision research, such as feature extraction, object recognition, image retrieval, and the like, as an important clue. However, under different illumination conditions, the colors reflected by the object are different, and the purpose of white balance is to eliminate the influence of different illumination and restore the real color of the object under standard illumination.
Image illumination estimation is the first step of white balance calculation, and is often the most important and difficult step. The result of the illumination estimation can often be directly used to correct the color shift of the image, for example, in the white balance of the camera, the gain value of each channel of red, green and blue of the camera is adjusted by directly using the color shift of the illumination.
The existing illumination estimation methods include two classical algorithms: the gray world and the gray edge. The gray world assumption holds that the average reflectance of all physical surfaces in a scene is achromatic (gray). Under this assumption, the statistical mean of each color channel of an image taken under white illumination is achromatic, i.e. the channel means are equal; any difference between the channel means must therefore be caused by the ambient illumination. The gray world method is built on this assumption. It is computationally simple, but its results are often unsatisfactory.
The gray edge hypothesis holds that the mean of the reflectance differences of all physical surfaces in a scene is achromatic (gray). Based on this assumption, the gray edge method first computes the mean of the magnitudes of the first- or second-order gradients of each channel image, then estimates the image illumination from the differences between the channel means. During the computation, the images are blurred with Gaussian kernels of different σ to capture information at different scales, and a Minkowski norm is introduced, yielding the gray edge method in its general form:

\left( \int \left| \frac{\partial^n f_c^{\sigma}(x)}{\partial x^n} \right|^p dx \right)^{1/p} = k\, e_c^{n,p,\sigma}, \quad c \in \{r, g, b\}

wherein f_c(x) is the c-channel image of the color image f(x), x is the two-dimensional image coordinate, f_c^{\sigma}(x) is the image after Gaussian convolution, n = 0, 1, 2 is the order of the image gradient, p is the Minkowski norm, e^{n,p,\sigma} is the estimated illumination, and k is a normalization constant such that ‖e‖ = 1. This formula unifies the traditional gray world method, the max-RGB method, the Shades of Gray algorithm, and the gray edge method in a single framework.
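As an illustrative sketch (not the patent's implementation), the unified framework can be written in a few lines of NumPy; the function name is ours, and Gaussian pre-smoothing with σ is deliberately omitted for brevity:

```python
import numpy as np

def grey_edge_illuminant(img, n=0, p=1):
    """Sketch of the unified illuminant-estimation framework
    (int |d^n f/dx^n|^p dx)^(1/p) = k * e_c.
    n=0, p=1 reduces to the classic gray world method; n=1 gives a
    first-order gray edge.  img: H x W x 3 float array."""
    f = img.astype(np.float64)
    if n >= 1:
        gy, gx = np.gradient(f, axis=(0, 1))  # per-channel spatial derivatives
        f = np.sqrt(gx ** 2 + gy ** 2)        # first-order gradient magnitude
    e = (np.abs(f) ** p).mean(axis=(0, 1)) ** (1.0 / p)
    return e / np.linalg.norm(e)              # k chosen so that ||e|| = 1
```

With n = 0 and p = 1 this is exactly the gray world estimate; raising n trades the channel means for gradient statistics, as in the framework above.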
The gray edge method has several limitations. First, although it can be implemented in only a few lines of code, the computation involves Gaussian kernel convolution, which severely limits its speed. For example, experiments with the second-order gray edge method show that 4 < σ < 7 gives good results; even at σ = 4 the convolution kernel is 25 × 25, and even when decomposed into separable x- and y-direction convolutions it still requires two 1 × 25 convolutions, a computational cost roughly 50 times that of the gray world algorithm. Second, if the σ and p parameters are chosen poorly, good results are hard to obtain, especially when no prior information about the input image is available. Third, computing the image gradient is itself complex and costly; the first-order gradient magnitude, for example, is

\left| \frac{\partial f(x)}{\partial x} \right| = \sqrt{ \left( \frac{\partial f(x)}{\partial x} \right)^2 + \left( \frac{\partial f(x)}{\partial y} \right)^2 }

which requires first computing the first-order gradients in the x and y directions, then squaring and taking a square root; the second-order gradient is more complex still. In short, the gray edge method is simple in principle and a clear improvement in effect, but its computation involves Gaussian convolution with high time complexity, and there is little concrete guidance for choosing the convolution kernel size.
In summary, the gray world algorithm has high accuracy in most scenes, but it is very unstable in some scenes (such as large area monochromatic objects); the gray edge algorithm is robust, but its accuracy is not high.
Disclosure of Invention
The invention provides an image processing method based on white balance correction of a gray edge constraint gray world, aiming at solving the problems of the existing gray world algorithm and the gray edge algorithm.
The invention is realized by adopting the following technical scheme:
a white balance correction image processing method based on gray edge constraint gray world comprises the following steps:
after an image is collected by an image sensor, converting an optical signal into an electric signal, transmitting the electric signal to a Bayer image processing unit in a Bayer image mode, and outputting a Bayer image;
inputting the Bayer image output in the step (I) into a gray edge statistical module and a white balance coefficient calculation module on one hand, and performing a gray edge algorithm to obtain gray edge white balance coefficients GEgainR and GEgainB;
(III) on the other hand, the Bayer image output in the step (I) is subjected to white balance correction processing through a white balance correction module according to the gray edge white balance coefficient obtained in the step (II), and is subjected to demosaicing processing through a demosaicing module;
(IV) sequentially passing the image output in the step (III) through a grey world statistical module and a white balance coefficient calculation module to perform grey world algorithm to obtain grey world white balance coefficients GSgainR and GSgainB;
and (V) utilizing the gray edge white balance coefficient obtained in the step (II) to constrain the gray world white balance coefficient obtained in the step (IV), and obtaining a white balance coefficient finally used for image correction through a white balance coefficient calculation module: the method comprises the following specific steps:
setting two white balance threshold parameters limit1 and limit2 (their values can be determined by debugging by a person skilled in the art); first calculating the difference absgainR between GEgainR and GSgainR, i.e. absgainR = |GEgainR - GSgainR|. If absgainR is not greater than limit1, the gray world white balance coefficient is close to the gray edge coefficient, and the exact white balance solution obtained by the gray world algorithm is adopted; if absgainR is greater than limit2, the gray world coefficient is far from the gray edge coefficient, and the solution obtained by the gray edge algorithm is adopted; otherwise the coefficients lie in the critical region, and the weighted average of the two is taken as the white balance solution:

w = (absgainR - limit1) / (limit2 - limit1)
gainR_new = (1 - w) * GSgainR + w * GEgainR
gainB_new = (1 - w) * GSgainB + w * GEgainB

wherein gainR_new and gainB_new are the r- and b-channel white balance coefficients finally used for correction, and the white balance coefficient of the g channel is always set to 1;
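The per-channel decision logic of step (V) can be sketched as follows; the function name is ours, and the linear interpolation weight in the critical region is an assumption, since the text only specifies "a weighted average":

```python
def blend_wb_coeff(ge_gain, gs_gain, limit1, limit2):
    """Constrain a gray world coefficient (gs_gain) by a gray edge
    coefficient (ge_gain).  limit1/limit2 are tuning thresholds
    determined by debugging."""
    absgain = abs(ge_gain - gs_gain)
    if absgain <= limit1:    # coefficients agree: trust the exact gray world solution
        return gs_gain
    if absgain > limit2:     # far from the gray edge constraint: fall back to gray edge
        return ge_gain
    w = (absgain - limit1) / (limit2 - limit1)
    return (1.0 - w) * gs_gain + w * ge_gain   # blend inside the critical region
```

The function would be applied independently to the r and b channels; the g channel gain stays fixed at 1.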
(VI) according to the white balance coefficient obtained in step (V), returning to step (III) to perform white balance correction and demosaicing on the Bayer image again; the image then undergoes color image processing and enters the compression/display/storage device, completing the image processing.
The core of the method is to solve for the gray edge constraint with the gray edge algorithm, limiting the solution space to the range of the gray edge constraint and thereby guaranteeing the basic robustness of the algorithm, and then to solve for an exact solution within this limited space with the gray world algorithm. Both the gray edge algorithm and the gray world algorithm can be conventional prior-art algorithms.
Preferably, the gray edge algorithm in step (II) adopts one of two practical variants, namely a gray edge algorithm based on image block gradients or a gray edge algorithm based on horizontal mean smoothing with down-sampling and horizontal first-order differencing, to obtain the Bayer image illumination estimate e, from which the gray edge white balance coefficients GEgainR and GEgainB are then computed; the specific steps are as follows:
Gray edge algorithm based on image block gradients: first, the image is divided uniformly into Bw × Bh blocks, each of size s × s, and all pixels inside each block are averaged to obtain a single pixel value, yielding a small image f_c^s of size Bw × Bh; then the second-order gradient of this image is computed with the discrete Laplacian operator in the following formula (1), and the average gradient of each channel is computed to obtain the Bayer image illumination estimate e:
\int (Lap \otimes f_c^s(x))\, dx = k e_c, \quad c \in \{r, g, b\} \qquad (1)
wherein f_c^s(x) denotes the small image obtained from the c-channel image after the s × s block-averaging operation, and Lap is the discrete Laplacian operator:

Lap = \begin{bmatrix} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{bmatrix};
Gray edge algorithm based on horizontal mean smoothing with down-sampling and horizontal first-order differencing: the N-th order down-sampling filtering template is set as a 1 × N row of ones, T_N = [1, 1, …, 1] (the normalizing division by N is omitted);
the image after horizontal mean smoothing and down-sampling is

f^N(m', n) = \sum_{m = N(m'-1)}^{N m' - 1} f(m, n)

wherein N ∈ (1, width(f(m, n))), i.e. the down-sampling template is larger than 1 and smaller than the image width;
the gradient is computed by horizontal first-order differencing with the difference template

T = [1, -1]

and the difference image is the convolution of the down-sampled image with the difference template:

f_T^N(x) = | f^N \otimes T |;
for the Minkowski norm, the value p = 1 is taken, i.e. no Minkowski norm is introduced;
the illumination estimate of the scene can then be derived as

\int f_T^N(x)\, dx = k e^N

wherein f_T^N(x) = | f^N \otimes T |, thereby obtaining the Bayer image illumination estimate e;
then, according to the illumination estimate e = [e_r, e_g, e_b]^T, the Bayer image is corrected to the standard illumination e' = [e'_r, e'_g, e'_b]^T = [1/3, 1/3, 1/3]^T, giving the output image f'(x):

f'_c(x) = \frac{e'_c}{e_c} f_c(x), \quad c \in \{r, g, b\}

the gray edge white balance coefficient GEgainR then equals e_g / e_r, and likewise GEgainB equals e_g / e_b; i.e. the gray edge constraint is obtained.
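A minimal NumPy sketch of the block-gradient variant above (function names are ours; the gain ratios relative to the g channel follow the text's convention that the g gain is always 1):

```python
import numpy as np

def block_gradient_illuminant(img, s=4):
    """Average the image over s x s blocks, apply the discrete
    Laplacian Lap, and take the mean absolute response per channel
    as the illuminant estimate e (normalization k omitted)."""
    H, W, _ = img.shape
    Bh, Bw = H // s, W // s
    small = img[:Bh * s, :Bw * s].reshape(Bh, s, Bw, s, 3).mean(axis=(1, 3))
    lap = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=float)
    e = np.zeros(3)
    for c in range(3):
        ch = small[:, :, c]
        acc = np.zeros((Bh - 2, Bw - 2))       # 'valid' 3x3 convolution
        for i in range(3):
            for j in range(3):
                acc += lap[i, j] * ch[i:i + Bh - 2, j:j + Bw - 2]
        e[c] = np.abs(acc).mean()
    return e

def ge_gains(e):
    """Gray edge white balance coefficients relative to the g channel."""
    return e[1] / e[0], e[1] / e[2]            # GEgainR, GEgainB
```

Because block averaging and the Laplacian are linear, a channel that is a scaled copy of another yields exactly scaled estimates, which is why the ratios e_g/e_r and e_g/e_b serve directly as correction gains.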
Preferably, the gray world algorithm in step (iv) employs the following method:
first, for each frame of the output image of step (III), the number of white points and the accumulated values Σf_r(x), Σf_g(x), Σf_b(x) of the r, g and b channels over the white points are computed; a pixel is considered a white point when the following three conditions are satisfied simultaneously:

Condition 1: Gsmin < f_g(x) < Gsmax
Condition 2: | f_r(x) - f_g(x) | < GSration · f_g(x)
Condition 3: | f_b(x) - f_g(x) | < GSration · f_g(x)

Condition 1 states that a pixel is counted only when its g-channel value lies between Gsmin and Gsmax, removing the influence of extremely dark and extremely bright pixels; conditions 2 and 3 state that a pixel is considered a white point only when the absolute differences between its r- and b-channel values and its g-channel value are less than the product of GSration and f_g(x). Only when all three conditions hold simultaneously is the pixel used for the gray world statistics. Gsmin, Gsmax and GSration are determined by debugging by those skilled in the art;
when the number of white points in a frame exceeds a set threshold, the statistics of that frame can be used to compute the gray world white balance coefficients; the results over multiple frames are averaged to obtain the final gray world white balance coefficients, computed as

GSgainR = gainR · Σf_g(x) / Σf_r(x),  GSgainB = gainB · Σf_g(x) / Σf_b(x)

wherein gainR and gainB are the white balance coefficients currently applied to the r and b channels; this yields the gray world white balance coefficients GSgainR and GSgainB.
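The white-point statistics can be sketched as follows; the function name is ours, and the default parameter values are placeholders to be tuned, as the text notes:

```python
import numpy as np

def grey_world_gains(img, gainR, gainB, GSmin=0.05, GSmax=0.95,
                     GSration=0.4, min_white=16):
    """White-point selection and gray world gains.  img is the
    H x W x 3 image already corrected with the current gains
    gainR/gainB; returns None if the frame has too few white points."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    white = ((g > GSmin) & (g < GSmax)           # condition 1: mid-tone g
             & (np.abs(r - g) < GSration * g)    # condition 2: r close to g
             & (np.abs(b - g) < GSration * g))   # condition 3: b close to g
    if white.sum() < min_white:                  # frame statistics unusable
        return None
    GSgainR = gainR * g[white].sum() / r[white].sum()
    GSgainB = gainB * g[white].sum() / b[white].sum()
    return GSgainR, GSgainB
```

Returning None mirrors the text's rule that a frame contributes only when its white-point count exceeds the threshold; usable frames would then be averaged.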
Based on the above process, as shown in fig. 2:
(1) solving for the gray edge constraint with the gray edge algorithm: the image is first corrected with the gray edge algorithm to obtain the gray edge white balance coefficients. The invention provides two practical gray edge algorithms. The first is based on image block gradients: a small image is obtained by partitioning the image uniformly and averaging all pixels inside each block to a single pixel value; the second-order gradient of this image is then computed with the discrete Laplacian operator and the average gradient of each channel is taken, yielding an estimate of the image illumination. The second is a simplified algorithm based on horizontal mean smoothing with down-sampling and horizontal first-order differencing: Gaussian smoothing is approximated by a horizontal mean-smoothing down-sampling filter, and the derivative is obtained by a horizontal first-order difference, again yielding an estimate of the image illumination. Once the illumination estimate is obtained, white balance correction can be applied to the image, and the gray edge constraint, i.e. the gray edge white balance coefficients, is also obtained.
(2) solving for an exact solution within the range constrained by the gray edge with the gray world algorithm: based on the image corrected by the gray edge white balance, the gray world white balance coefficients are computed with the gray world method, and the white balance coefficients finally used for image correction are obtained by combining the gray edge and gray world coefficients. If the gray world coefficient is close to the gray edge coefficient, the gray world coefficient is adopted; if it is far away, the gray edge coefficient is adopted; otherwise, in the critical region, the weighted average of the two is adopted.
The method has the following advantages:
(1) The computation load is small. The gray edge and gray world statistical modules must scan the whole image, but they perform only simple comparisons and accumulations, so they consume few resources; the white balance coefficient calculation module performs some logically more complex calculations, but on a small amount of data, so it also consumes few resources.
(2) The accuracy of white balance correction is high.
Further, a white balance correction image processing apparatus based on a gray-edge-constrained gray world comprises:
an image sensor that outputs an image to a Bayer image processing unit in a Bayer image pattern; the Bayer image processing unit outputs a Bayer image;
on one hand, the output Bayer image is subjected to gray edge algorithm sequentially through a gray edge statistical module and a white balance coefficient calculation module to obtain gray edge white balance coefficients GEgainR and GEgainB, and the gray edge white balance coefficients GEgainR and GEgainB are output to a white balance correction module;
on the other hand, the output Bayer image is subjected to white balance correction processing through the white balance correction module according to the obtained gray edge white balance coefficient and is output to the demosaicing module; the demosaicing module carries out demosaicing processing and outputs a demosaiced image;
on one hand, the output demosaiced image is subjected to gray world algorithm sequentially through a gray world statistical module and a white balance coefficient calculation module to obtain gray world white balance coefficients GSgainR and GSgainB, and the gray world white balance coefficients GSgainR and GSgainB are output to the white balance coefficient calculation module;
the white balance coefficient calculation module constrains the obtained gray world white balance coefficients with the obtained gray edge white balance coefficients to produce the white balance coefficients finally used for image correction, and outputs them to the white balance correction module; specifically: two white balance threshold parameters limit1 and limit2 are set (their values can be determined by debugging by a person skilled in the art), and the difference absgainR = |GEgainR - GSgainR| is first computed. If absgainR is not greater than limit1, the gray world coefficient is close to the gray edge coefficient and the exact solution from the gray world algorithm is adopted; if absgainR is greater than limit2, the gray world coefficient is far from the gray edge coefficient and the gray edge solution is adopted; otherwise the coefficients lie in the critical region and the weighted average of the two is taken:

w = (absgainR - limit1) / (limit2 - limit1)
gainR_new = (1 - w) * GSgainR + w * GEgainR
gainB_new = (1 - w) * GSgainB + w * GEgainB

wherein gainR_new and gainB_new are the r- and b-channel white balance coefficients finally used for correction, and the white balance coefficient of the g channel is always set to 1;
the white balance correction module performs white balance correction on the Bayer image again according to the white balance coefficients finally used for image correction and outputs it to the demosaicing module; the demosaicing module outputs the image to the color image processing unit, which processes the image and outputs it to the compression/display/storage device.
Preferably, the gray edge algorithm performed by the gray edge statistical module and the white balance coefficient calculation module adopts either the gray edge algorithm based on image block gradients or the gray edge algorithm based on horizontal mean smoothing with down-sampling and horizontal first-order differencing to obtain the Bayer image illumination estimate e; the gray edge white balance coefficients GEgainR and GEgainB are then computed from e, yielding the gray edge constraint.
Preferably, the gray world algorithm performed by the gray world statistics module and the white balance coefficient calculation module adopts the following method:
first, for each frame of the demosaiced image, the number of white points and the accumulated values Σf_r(x), Σf_g(x), Σf_b(x) of the r, g and b channels over the white points are computed; a pixel is considered a white point when the following three conditions are satisfied simultaneously:

Condition 1: Gsmin < f_g(x) < Gsmax
Condition 2: | f_r(x) - f_g(x) | < GSration · f_g(x)
Condition 3: | f_b(x) - f_g(x) | < GSration · f_g(x)

Condition 1 states that a pixel is counted only when its g-channel value lies between Gsmin and Gsmax, removing the influence of extremely dark and extremely bright pixels; conditions 2 and 3 state that a pixel is considered a white point only when the absolute differences between its r- and b-channel values and its g-channel value are less than the product of GSration and f_g(x). Only when all three conditions hold simultaneously is the pixel used for the gray world statistics. Gsmin, Gsmax and GSration are determined by debugging by those skilled in the art.

When the number of white points in a frame exceeds a set threshold, the statistics of that frame can be used to compute the gray world white balance coefficients; the results over multiple frames are averaged to obtain the final gray world white balance coefficients, computed as

GSgainR = gainR · Σf_g(x) / Σf_r(x),  GSgainB = gainB · Σf_g(x) / Σf_b(x)

wherein gainR and gainB are the white balance coefficients currently applied to the r and b channels; this yields the gray world white balance coefficients GSgainR and GSgainB.
During operation, as shown in fig. 1, after an image is collected by the image sensor, an optical signal is converted into an electrical signal, and the electrical signal is transmitted to the Bayer image processing unit in a Bayer image mode to output a Bayer image; step (I) of the above method is carried out.
One path of the output Bayer image passes through the gray edge statistical module and the white balance coefficient calculation module, where the gray edge algorithm produces the gray edge white balance coefficients (realizing step (II) of the method); these are output on one path to the white balance correction module and on another to the white balance coefficient calculation module. The other path of the output Bayer image passes through the white balance correction module, which performs gray edge white balance correction according to the input gray edge white balance coefficients, and then through the demosaicing module, which converts the Bayer image to RGB through a color image interpolation algorithm, realizing step (III) of the method.
Obtaining a gray world white balance coefficient (realizing the step (IV) of the method) by passing one path of an output image of the demosaicing module through a gray world statistical module and a white balance coefficient calculation module through a gray world algorithm, and outputting the gray world white balance coefficient to the white balance coefficient calculation module; the other path of the output image of the demosaicing module is sequentially output to the color image processing unit and the compression/display/storage device.
The white balance coefficient calculation module firstly uses the gray edge white balance coefficient input by the gray edge statistical module as gray edge constraint, limits the space of the solution of the gray world white balance coefficient input by the gray world statistical module within the range constrained by the gray edge, and then calculates an accurate solution in the limited solution space to obtain the white balance coefficient finally used for image correction; outputting the white balance coefficient finally used for image correction to a white balance correction module; step (v) of carrying out the above method.
Finally, the Bayer image output in step (I) passes through the white balance correction module, the demosaicing module, the color image processing unit and the compression/display/storage device according to the white balance coefficients finally used for image correction, realizing step (VI) of the method.
The method is reasonable in design, and solves the problems that the existing gray world algorithm has high accuracy in most scenes, but is unstable in partial scenes (such as large-area monochromatic objects), and the gray edge algorithm has high robustness but has low accuracy.
Drawings
FIG. 1 is a block diagram of an imaging system of the apparatus of the present invention.
Fig. 2 is a flow chart of the main steps of the method of the present invention.
Detailed Description
The following detailed description of specific embodiments of the invention refers to the accompanying drawings.
Example 1
A white balance correction image processing method based on gray edge constraint gray world comprises the following steps:
(I) After the image is captured by the image sensor, the optical signal is converted into an electrical signal and passed to the Bayer image processing unit in Bayer image form; this unit mainly performs black level correction, defective pixel correction, denoising and similar processing, and outputs a Bayer image.
(II) inputting the Bayer image output in step (I) into the gray edge statistical module and the white balance coefficient calculation module, and performing the gray edge algorithm to obtain the gray edge white balance coefficients GEgainR and GEgainB;
first, the Bayer image illumination estimate e is obtained through either the gray edge algorithm based on image block gradients or the gray edge algorithm based on horizontal mean smoothing with down-sampling and horizontal first-order differencing; the specific steps are as follows:
Gray edge algorithm based on image block gradients:
first, the image is divided uniformly into Bw × Bh blocks, each of size s × s, and all pixels inside each block are averaged to obtain a single pixel value, yielding a small image f_c^s of size Bw × Bh; then the second-order gradient of this image is computed with the discrete Laplacian operator in the following formula (1), and the average gradient of each channel is computed to obtain the Bayer image illumination estimate e:
\int (Lap \otimes f_c^s(x))\, dx = k e_c, \quad c \in \{r, g, b\} \qquad (1)
wherein f_c^s(x) denotes the small image obtained from the c-channel image after the s × s block-averaging operation, and Lap is the discrete Laplacian operator:

Lap = \begin{bmatrix} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{bmatrix};
the gradient calculation is based on the result of the image block averaging, which hides an assumption: the mean of the differences of the average reflections of the neighboring blocks of all physical surfaces in the scene is achromatic (grey); in addition, the block averaging operation of an image also has several intuitive meanings: firstly, block averaging of an image is smooth denoising of the image, and the smooth denoising of the image is proved to be an important preprocessing process capable of improving the robustness of a white balance algorithm, for example, a general gray world method and a gray edge method both adopt Gaussian convolution to smooth the image; secondly, after the image is subjected to block averaging, the image size is only 1/(s × s) of the original image, so that the calculation amount of subsequent image processing is reduced.
Gray edge algorithm based on horizontal mean smoothing with down-sampling and horizontal first-order differencing:
this algorithm addresses two important steps in the gray edge algorithm, according to the nature of the gray edge assumption: and Gaussian smoothing and high-order derivative calculation, and a simplified algorithm based on image transverse mean smooth down-sampling and transverse first-order difference is provided. For gaussian smoothing, the essence is to perform high frequency filtering, which can be replaced by other high frequency filtering methods. Obviously, the simplest filtering method is mean filtering. Considering the characteristic of line-by-line transmission of image data of an image sensor, if horizontal and vertical bidirectional mean filtering is carried out, multiple lines of images must be buffered. In order to meet the real-time requirement, the method only adopts transverse mean filtering. However, since no longitudinal filtering is employed, the filtering effect will be reduced. In order to further filter high-frequency information, a filtering method of horizontal mean smooth down-sampling is adopted in consideration that image down-sampling is also a high-frequency removing method.
In general, the image after mean smoothing is divided by the smoothing template size N to normalize pixel values to the range 0 to 255 (for an 8-bit pixel depth). However, the white balance algorithm only needs the final illumination color estimate, not a fully normalized smooth image, so the division can be omitted to simplify the algorithm.
Setting the N-order down-sampling filtering template as a 1 × N row of ones:

T_N = [1 1 … 1]

the image after the transverse mean smooth down-sampling is

f^N(m′, n) = Σ_{m = N(m′−1)}^{N·m′ − 1} f(m, n)

wherein N ∈ (1, width(f(m, n))), i.e. the down-sampling template is larger than 1 and smaller than the image width;
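A sketch of the transverse mean smooth down-sampling, assuming the N-order template is a row of N ones and keeping the patent's simplification of omitting the division by N (function name and edge handling are illustrative):

```python
import numpy as np

def horiz_mean_downsample(img, N):
    """Sum each group of N consecutive pixels along each row.
    No division by N: the illumination estimate only needs channel
    ratios, so the common scale factor is irrelevant."""
    h, w = img.shape
    w2 = w // N                    # columns beyond the last full group are dropped
    img = img[:, :w2 * N].astype(np.int64)
    return img.reshape(h, w2, N).sum(axis=2)
```

Only one image row is needed at a time, which is why the patent restricts filtering to the transverse direction for line-by-line sensor output.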
For the derivation of the gradient, differences in the transverse and longitudinal directions, and even the oblique directions, would normally be required. Again, to obtain the image gradient without buffering image lines, a transverse first-order difference is adopted, with the difference template:
T=[1 -1]
The difference image is the convolution of the down-sampled image with the difference template. For the Minkowski norm parameter p, to further reduce the computational complexity by avoiding the multiplication and squaring operations, p is taken to be 1, i.e. no Minkowski norm is introduced;
When estimating the illumination color, it is generally necessary to compute the color mean of the difference image, in which case all three RGB channels are divided by the same number of image pixels ∫dx. In fact, if the division is not performed, the components α, β, γ of the illumination color (letting the illumination color be e = (α, β, γ)) are only scaled up by a common factor, and the proportional relationship between them does not change; so, again to simplify the algorithm, the division in the illumination estimation can be omitted. Finally, the illumination estimate of the scene is derived as:

∫ f_T^N(x) dx = k e^N

wherein f_T^N(x) = | f^N ⊗ T |,

thereby obtaining the Bayer image illumination estimate e.
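The illumination estimation with transverse first-order differences and p = 1 can be sketched as follows (a simplified per-channel version on a smoothed three-channel image; the patent operates on the Bayer data directly):

```python
import numpy as np

def gray_edge_illum_estimate(smooth):
    """Unnormalized illumination estimate from a smoothed (H, W, 3) image:
    per channel, the sum of absolute horizontal first-order differences
    (Minkowski p = 1; the division by the pixel count is omitted, so the
    result equals k*e up to a common scale)."""
    d = np.abs(np.diff(smooth.astype(np.float64), axis=1))  # convolution with T = [1 -1]
    return d.sum(axis=(0, 1))  # (e_r, e_g, e_b)
```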
Then, according to the illumination estimate e = [e_r, e_g, e_b]^T, the Bayer image is corrected to the standard illumination e′ = [e′_r, e′_g, e′_b]^T = [1/3, 1/3, 1/3]^T, giving the output image f′(x):

f′_c(x) = (e′_c / e_c) f_c(x),  c ∈ {r, g, b}
Then the gray edge white balance coefficient GEgainR is equal to e_g / e_r; for the same reason, GEgainB is equal to e_g / e_b; i.e. the gray edge constraint is obtained.
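Under the stated convention that the g-channel gain is always 1, the gray edge gains follow from the illumination estimate as channel ratios (a sketch; the exact gain expression is reconstructed, since the original formula is an unrendered figure in the source):

```python
def gray_edge_gains(e):
    """Gray edge white balance gains from e = (e_r, e_g, e_b), with the
    g-channel gain fixed at 1: scaling r by e_g/e_r and b by e_g/e_b maps
    the estimated illumination onto the achromatic axis."""
    e_r, e_g, e_b = e
    return e_g / e_r, e_g / e_b  # (GEgainR, GEgainB)
```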
And (III) on the other hand, the Bayer image output in the step (I) is subjected to white balance correction processing through a white balance correction module according to the gray edge white balance coefficient obtained in the step (II), demosaicing processing is performed through a demosaicing module, and the Bayer image is converted into RGB through a color image interpolation algorithm.
(IV) sequentially passing the image output in the step (III) through a grey world statistical module and a white balance coefficient calculation module to perform grey world algorithm to obtain grey world white balance coefficients GSgainR and GSgainB;
the method comprises the following specific steps:
Firstly, the number of white points in each frame of the output image and the accumulated values Σf_r(x), Σf_g(x), Σf_b(x) of the r, g, b channels over those white points are calculated; a pixel is considered a white point when the following three conditions are simultaneously satisfied:

Condition 1: Gsmin ≤ f_g(x) ≤ Gsmax
Condition 2: |f_r(x) − f_g(x)| < GSration · f_g(x)
Condition 3: |f_b(x) − f_g(x)| < GSration · f_g(x)

Condition 1 indicates that a pixel is counted only when its g-channel value lies between Gsmin and Gsmax, removing the influence of extremely dark and extremely bright points; conditions 2 and 3 indicate that a pixel is regarded as a white point only if the absolute difference between its r (respectively b) and g channel values is less than the product of GSration and f_g(x). Only when all three conditions are met simultaneously is the pixel used for the gray world statistics. Gsmin, Gsmax and GSration are determined by the person skilled in the art through debugging; typical empirical values are Gsmin = 10, Gsmax = 250 and GSration = 0.1.
When the number of white points in one frame of image exceeds a set threshold, the statistical result of that frame can be used to calculate the gray world white balance coefficients; the results of multiple frames are averaged to obtain the finally output gray world white balance coefficients, calculated as:

GSgainR = gainR · Σf_g(x) / Σf_r(x),  GSgainB = gainB · Σf_g(x) / Σf_b(x)

wherein gainR and gainB are the white balance coefficients currently applied to the r and b channels, and GSgainR and GSgainB are the gray world white balance coefficients.
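The gray world statistics above can be sketched as follows (NumPy; the update rule GSgain = gain · Σg/Σchannel is a reconstruction consistent with the statistics being gathered on the already-corrected image, and `min_points` stands in for the unspecified white-point threshold):

```python
import numpy as np

def gray_world_gains(rgb, gainR, gainB,
                     Gsmin=10, Gsmax=250, GSration=0.1, min_points=100):
    """Gray world gains from the near-gray ('white') points of one frame."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    mask = ((g >= Gsmin) & (g <= Gsmax)            # condition 1
            & (np.abs(r - g) < GSration * g)       # condition 2
            & (np.abs(b - g) < GSration * g))      # condition 3
    if mask.sum() < min_points:   # too few white points: keep the current gains
        return gainR, gainB
    sr, sg, sb = r[mask].sum(), g[mask].sum(), b[mask].sum()
    return gainR * sg / sr, gainB * sg / sb  # (GSgainR, GSgainB)
```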
And (V) utilizing the gray edge white balance coefficient obtained in the step (II) to constrain the gray world white balance coefficient obtained in the step (IV), and obtaining a white balance coefficient finally used for image correction through a white balance coefficient calculation module:
the method comprises the following specific steps:
Setting two white balance threshold parameters limit1 and limit2 (their values are determined by debugging by the person skilled in the art; typical empirical values are limit1 = 0.1 and limit2 = 0.3), first calculate the difference absgainR between GEgainR and GSgainR, i.e. absgainR = |GEgainR − GSgainR|. If absgainR is not greater than limit1, the gray world white balance coefficient is close to the gray edge white balance coefficient, and the accurate white balance solution obtained by the gray world algorithm is adopted; if absgainR is greater than limit2, the gray world white balance coefficient is far from the gray edge white balance coefficient, and the white balance solution obtained by the gray edge algorithm is used; otherwise, in the critical region, the weighted average of the two is taken as the white balance solution, using the following formula:

gainR_new = [(limit2 − absgainR) · GSgainR + (absgainR − limit1) · GEgainR] / (limit2 − limit1)

(and analogously for gainB_new with absgainB = |GEgainB − GSgainB|), wherein gainR_new and gainB_new are the r- and b-channel white balance coefficients finally used for correction, and the white balance coefficient of the g channel is always set to 1.
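The constraint step can be sketched per channel as follows (the linear blend weights in the critical region are an assumption; the patent specifies only that a weighted average is taken there):

```python
def fuse_gains(GE, GS, limit1=0.1, limit2=0.3):
    """Constrain a gray world gain GS by a gray edge gain GE."""
    absgain = abs(GE - GS)
    if absgain <= limit1:     # close: trust the accurate gray world solution
        return GS
    if absgain > limit2:      # far: fall back to the gray edge solution
        return GE
    w = (absgain - limit1) / (limit2 - limit1)  # 0 at limit1, 1 at limit2
    return (1 - w) * GS + w * GE
```

This is applied once with (GEgainR, GSgainR) and once with (GEgainB, GSgainB); the g-channel gain stays 1.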
(VI), according to the white balance coefficient obtained in the step (V), returning to the step (III) to sequentially perform white balance correction processing and demosaicing processing on the Bayer image again; then, after color image processing (mainly including color correction, γ correction, color space conversion, HDR, boundary enhancement, and the like) is performed, the image is input to a device such as compression/display/storage (for image display or image storage), and image processing is completed.
In specific implementation, as shown in fig. 1, step (i) is implemented by an image sensor and a Bayer image processing unit; the step (II) is realized by a gray edge statistical module and a white balance coefficient calculation module; the step (III) is realized by a white balance correction module and a demosaicing module; the step (IV) is realized by a grey world statistics module and a white balance coefficient calculation module; the step (V) is realized by a white balance coefficient calculation module; and (VI) the step is realized by a white balance correction module, a demosaicing module, a color image processing unit and a compression/display/storage device.
Example 2
As shown in fig. 1, a white balance correction image processing apparatus that constrains a gray world based on a gray edge includes:
an image sensor that outputs an image to a Bayer image processing unit in a Bayer image pattern; the Bayer image processing unit outputs a Bayer image;
on one hand, the output Bayer image is subjected to gray edge algorithm sequentially through a gray edge statistical module and a white balance coefficient calculation module to obtain gray edge white balance coefficients GEgainR and GEgainB, and the gray edge white balance coefficients GEgainR and GEgainB are output to a white balance correction module;
on the other hand, the output Bayer image is subjected to white balance correction processing through the white balance correction module according to the obtained gray edge white balance coefficient and is output to the demosaicing module; the demosaicing module carries out demosaicing processing and outputs a demosaiced image;
on one hand, the output demosaiced image is subjected to gray world algorithm sequentially through a gray world statistical module and a white balance coefficient calculation module to obtain gray world white balance coefficients GSgainR and GSgainB, and the gray world white balance coefficients GSgainR and GSgainB are output to the white balance coefficient calculation module;
The white balance coefficient calculation module uses the obtained gray edge white balance coefficient to constrain the obtained gray world white balance coefficient, obtains the white balance coefficient finally used for image correction, and outputs it to the white balance correction module. The specific steps are as follows: setting two white balance threshold parameters limit1 and limit2 (their values are determined by debugging by the person skilled in the art; typical empirical values are limit1 = 0.1 and limit2 = 0.3), first calculate the difference absgainR between GEgainR and GSgainR, i.e. absgainR = |GEgainR − GSgainR|; if absgainR is not greater than limit1, the gray world white balance coefficient is close to the gray edge white balance coefficient, and the accurate white balance solution obtained by the gray world algorithm is adopted; if absgainR is greater than limit2, the gray world white balance coefficient is far from the gray edge white balance coefficient, and the white balance solution obtained by the gray edge algorithm is used; otherwise, in the critical region, the weighted average of the two is taken as the white balance solution, using the following formula:

gainR_new = [(limit2 − absgainR) · GSgainR + (absgainR − limit1) · GEgainR] / (limit2 − limit1)

(and analogously for gainB_new with absgainB = |GEgainB − GSgainB|), wherein gainR_new and gainB_new are the r- and b-channel white balance coefficients finally used for correction, and the white balance coefficient of the g channel is always set to 1;
the white balance correction module performs white balance correction processing on the Bayer image again according to a white balance coefficient finally used for image correction and outputs the Bayer image to the demosaicing module; the output demosaicing module outputs the image to a color image processing unit; the color image processing unit processes the image and outputs the image to a compression/display/storage device or the like.
In specific implementation, a gray edge algorithm performed by the gray edge statistical module and the white balance coefficient calculation module adopts a gray edge algorithm based on image block gradients or a gray edge algorithm based on image transverse mean smooth down-sampling and transverse first-order difference to obtain a Bayer image illumination estimation value e, and then gray edge white balance coefficients GEgainR and GEgainB are obtained according to the Bayer image illumination estimation value e; the method comprises the following specific steps:
Gray edge algorithm based on image block gradients: firstly, the image is uniformly divided into Bw × Bh blocks, each of size s × s, and all pixels inside each block are averaged to obtain the pixel value

f_c^s(m′, n′) = (1/s²) · Σ_{m=(m′−1)s+1}^{m′·s} Σ_{n=(n′−1)s+1}^{n′·s} f_c(m, n)

thus obtaining a small image of size Bw × Bh; then the second-order gradient of the image is calculated using the discrete Laplacian operator in formula (1), and the average gradient of each channel is computed to obtain the Bayer image illumination estimate e:

∫ (Lap ⊗ f_c^s(x)) dx = k e_c,  c ∈ {r, g, b},    (1)

wherein f_c^s represents the small image of the c-channel after the s × s block averaging operation, and Lap is the discrete Laplacian:

Lap = [ 1   1   1
        1  −8   1
        1   1   1 ];
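The block-gradient variant can be sketched end to end as follows (NumPy; the function name and the 'valid' convolution boundary handling are assumptions):

```python
import numpy as np

def block_gradient_illum_estimate(rgb, s):
    """Illumination estimate of the block-gradient gray edge variant:
    per channel, average s-by-s blocks, apply the discrete Laplacian,
    and sum the absolute responses (result equals k*e up to scale)."""
    lap = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=np.float64)
    e = []
    for c in range(3):
        ch = rgb[..., c].astype(np.float64)
        h, w = ch.shape
        bh, bw = h // s, w // s
        small = ch[:bh * s, :bw * s].reshape(bh, s, bw, s).mean(axis=(1, 3))
        # 'valid' correlation with the (symmetric) Laplacian template
        resp = sum(lap[i, j] * small[i:i + bh - 2, j:j + bw - 2]
                   for i in range(3) for j in range(3))
        e.append(np.abs(resp).sum())
    return np.array(e)  # (e_r, e_g, e_b)
```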
Gray edge algorithm based on image transverse mean smooth down-sampling and transverse first-order difference: the N-order down-sampling filtering template is set as a 1 × N row of ones:

T_N = [1 1 … 1]

and the image after the transverse mean smooth down-sampling is

f^N(m′, n) = Σ_{m = N(m′−1)}^{N·m′ − 1} f(m, n)

wherein N ∈ (1, width(f(m, n))), i.e. the down-sampling template is larger than 1 and smaller than the image width;
the gradient solving mode of transverse first-order difference is adopted, and the difference template is as follows:
T=[1 -1]
the difference image is the convolution of the down-sampled image with the difference template:

f_T^N(x) = | f^N ⊗ T |
For the Minkowski norm parameter, the value of p is taken to be 1, i.e. no Minkowski norm is introduced;
an illumination estimate for the scene can be derived as:

∫ f_T^N(x) dx = k e^N

wherein f_T^N(x) = | f^N ⊗ T |,

thereby obtaining the Bayer image illumination estimate e;
then, according to the illumination estimate e = [e_r, e_g, e_b]^T, the Bayer image is corrected to the standard illumination e′ = [e′_r, e′_g, e′_b]^T = [1/3, 1/3, 1/3]^T, giving the output image f′(x):

f′_c(x) = (e′_c / e_c) f_c(x),  c ∈ {r, g, b}
then the gray edge white balance coefficient GEgainR is equal to e_g / e_r; for the same reason, GEgainB is equal to e_g / e_b; i.e. the gray edge constraint is obtained.
In specific implementation, the gray world algorithm performed by the gray world statistics module and the white balance coefficient calculation module adopts the following method:
Firstly, the number of white points in each frame of the demosaiced image and the accumulated values Σf_r(x), Σf_g(x), Σf_b(x) of the r, g, b channels over the white points in each frame are calculated; a pixel is considered a white point when the following three conditions are simultaneously satisfied:

Condition 1: Gsmin ≤ f_g(x) ≤ Gsmax
Condition 2: |f_r(x) − f_g(x)| < GSration · f_g(x)
Condition 3: |f_b(x) − f_g(x)| < GSration · f_g(x)

Condition 1 indicates that a pixel is counted only when its g-channel value lies between Gsmin and Gsmax, removing the influence of extremely dark and extremely bright points; conditions 2 and 3 indicate that a pixel is regarded as a white point only if the absolute difference between its r (respectively b) and g channel values is less than the product of GSration and f_g(x). Only when all three conditions are met simultaneously is the pixel used for the gray world statistics. Gsmin, Gsmax and GSration are determined by the person skilled in the art through debugging; typical empirical values are Gsmin = 10, Gsmax = 250 and GSration = 0.1.
When the number of white points in one frame of image exceeds a set threshold, the statistical result of that frame can be used to calculate the gray world white balance coefficients; the results of multiple frames are averaged to obtain the finally output gray world white balance coefficients, calculated as:

GSgainR = gainR · Σf_g(x) / Σf_r(x),  GSgainB = gainB · Σf_g(x) / Σf_b(x)

wherein gainR and gainB are the white balance coefficients currently applied to the r and b channels, giving the gray world white balance coefficients GSgainR and GSgainB.
In specific implementation, the demosaicing module converts the Bayer image into RGB through a color image interpolation algorithm and outputs a demosaiced image.

Claims (10)

1. A white balance correction image processing method based on a gray edge constraint gray world is characterized in that: the method comprises the following steps:
after an image is collected by an image sensor, converting an optical signal into an electric signal, transmitting the electric signal to a Bayer image processing unit in a Bayer image mode, and outputting a Bayer image;
inputting the Bayer image output in the step (I) into a gray edge statistical module and a white balance coefficient calculation module on one hand, and performing a gray edge algorithm to obtain gray edge white balance coefficients GEgainR and GEgainB;
(III) on the other hand, the Bayer image output in the step (I) is subjected to white balance correction processing through a white balance correction module according to the gray edge white balance coefficient obtained in the step (II), and is subjected to demosaicing processing through a demosaicing module;
(IV) sequentially passing the image output in the step (III) through a grey world statistical module and a white balance coefficient calculation module to perform grey world algorithm to obtain grey world white balance coefficients GSgainR and GSgainB;
and (V) utilizing the gray edge white balance coefficient obtained in the step (II) to constrain the gray world white balance coefficient obtained in the step (IV), and obtaining a white balance coefficient finally used for image correction through a white balance coefficient calculation module: the method comprises the following specific steps:
setting two white balance threshold parameters of limit1 and limit2, firstly calculating a difference value absgainR between GEgainR and GSgainR, namely absgainR = | GEgainR-GSgainR |, and if absgainR is not greater than limit1, indicating that a grey world white balance coefficient is close to a grey edge white balance coefficient, then adopting a grey world algorithm to obtain a white balance accurate solution; if the absgainR is greater than limit2, indicating that the gray world white balance coefficient is far away from the gray edge white balance coefficient, then obtaining a white balance solution by using a gray edge algorithm; and if the other conditions are in the critical region, taking the weighted average of the two as the solution of white balance by adopting the following formula:
gainR_new = [(limit2 − absgainR) · GSgainR + (absgainR − limit1) · GEgainR] / (limit2 − limit1)

wherein gainR_new and gainB_new are the r- and b-channel white balance coefficients finally used for correction (gainB_new is obtained analogously with absgainB = |GEgainB − GSgainB|), and the white balance coefficient of the g channel is always set to 1;
(VI), according to the white balance coefficient obtained in the step (V), returning to the step (III) to sequentially perform white balance correction processing and demosaicing processing on the Bayer image again; then, after color image processing, the image enters a compression/display/storage device to complete the processing of the image.
2. The method of claim 1, wherein the method comprises: obtaining a Bayer image illumination estimation value e by adopting a gray edge algorithm based on image block gradient or a gray edge algorithm based on image transverse mean smooth downsampling and transverse first-order difference in the gray edge algorithm in the step (II), and then obtaining gray edge white balance coefficients GEgainR and GEgainB according to the Bayer image illumination estimation value e; the method comprises the following specific steps:
gray edge algorithm based on image block gradients: firstly, the image is uniformly divided into Bw × Bh blocks, each of size s × s, and all pixels inside each block are averaged to obtain the pixel value

f_c^s(m′, n′) = (1/s²) · Σ_{m=(m′−1)s+1}^{m′·s} Σ_{n=(n′−1)s+1}^{n′·s} f_c(m, n)

thus obtaining a small image of size Bw × Bh; then the second-order gradient of the image is calculated using the discrete Laplacian operator in formula (1), and the average gradient of each channel is computed to obtain the Bayer image illumination estimate e:

∫ (Lap ⊗ f_c^s(x)) dx = k e_c,  c ∈ {r, g, b},    (1)

wherein f_c^s represents the small image of the c-channel after the s × s block averaging operation, and Lap is the discrete Laplacian:

Lap = [ 1   1   1
        1  −8   1
        1   1   1 ];
gray edge algorithm based on image transverse mean smooth down-sampling and transverse first-order difference: the N-order down-sampling filtering template is set as a 1 × N row of ones:

T_N = [1 1 … 1]

and the image after the transverse mean smooth down-sampling is

f^N(m′, n) = Σ_{m = N(m′−1)}^{N·m′ − 1} f(m, n)

wherein N ∈ (1, width(f(m, n))), i.e. the down-sampling template is larger than 1 and smaller than the image width;
the gradient solving mode of transverse first-order difference is adopted, and the difference template is as follows:
T=[1 -1]
the difference image is the convolution of the down-sampled image with the difference template:

f_T^N(x) = | f^N ⊗ T |
For the Minkowski norm parameter, the value of p is taken to be 1, i.e. no Minkowski norm is introduced;
an illumination estimate for the scene can be derived as:

∫ f_T^N(x) dx = k e^N

wherein f_T^N(x) = | f^N ⊗ T |,

thereby obtaining the Bayer image illumination estimate e;
then, according to the illumination estimate e = [e_r, e_g, e_b]^T, the Bayer image is corrected to the standard illumination e′ = [e′_r, e′_g, e′_b]^T = [1/3, 1/3, 1/3]^T, giving the output image f′(x):

f′_c(x) = (e′_c / e_c) f_c(x),  c ∈ {r, g, b}
then the gray edge white balance coefficient GEgainR is equal to e_g / e_r; for the same reason, GEgainB is equal to e_g / e_b; i.e. the gray edge constraint is obtained.
3. The method of image processing for white balance correction based on gray-edge-constrained gray world as claimed in claim 1 or 2, wherein: the grey world algorithm in the step (IV) adopts the following method:
firstly, the number of white points in each frame of the image output by the step (III) and the accumulated values Σf_r(x), Σf_g(x), Σf_b(x) of the r, g, b channels over the white points in each frame are calculated; a pixel is considered a white point when the following three conditions are simultaneously satisfied:

Condition 1: Gsmin ≤ f_g(x) ≤ Gsmax
Condition 2: |f_r(x) − f_g(x)| < GSration · f_g(x)
Condition 3: |f_b(x) − f_g(x)| < GSration · f_g(x)

wherein condition 1 indicates that a pixel is counted only when its g-channel value lies between Gsmin and Gsmax, removing the influence of extremely dark and extremely bright points; conditions 2 and 3 indicate that a pixel is regarded as a white point only if the absolute difference between its r (respectively b) and g channel values is less than the product of GSration and f_g(x); only when the three conditions are met simultaneously is the pixel used for the gray world statistics;
when the number of white points in one frame of image exceeds a set threshold, the statistical result of that frame can be used to calculate the gray world white balance coefficients; the results of multiple frames are averaged to obtain the finally output gray world white balance coefficients, calculated as:

GSgainR = gainR · Σf_g(x) / Σf_r(x),  GSgainB = gainB · Σf_g(x) / Σf_b(x)

wherein gainR and gainB are the white balance coefficients currently applied to the r and b channels, giving the gray world white balance coefficients GSgainR and GSgainB.
4. The method of image processing for white balance correction based on gray-edge-constrained gray world as claimed in claim 1 or 2, wherein: and (3) converting the Bayer image into RGB through a color image interpolation algorithm by the demosaic module in the step (III).
5. The method of claim 3, wherein the image processing method comprises: and (3) converting the Bayer image into RGB through a color image interpolation algorithm by the demosaic module in the step (III).
6. A white balance correction image processing apparatus that constrains a gray world based on a gray edge, characterized in that: the method comprises the following steps:
an image sensor that outputs an image to a Bayer image processing unit in a Bayer image pattern; the Bayer image processing unit outputs a Bayer image;
on one hand, the output Bayer image is subjected to gray edge algorithm sequentially through a gray edge statistical module and a white balance coefficient calculation module to obtain gray edge white balance coefficients GEgainR and GEgainB, and the gray edge white balance coefficients GEgainR and GEgainB are output to a white balance correction module;
on the other hand, the output Bayer image is subjected to white balance correction processing through the white balance correction module according to the obtained gray edge white balance coefficient and is output to the demosaicing module; the demosaicing module carries out demosaicing processing and outputs a demosaiced image;
the output demosaiced image is subjected to gray world algorithm sequentially through a gray world statistical module and a white balance coefficient calculation module to obtain gray world white balance coefficients GSgainR and GSgainB, and the gray world white balance coefficients GSgainR and GSgainB are output to a white balance coefficient calculation module;
the white balance coefficient calculation module uses the obtained gray edge white balance coefficient to constrain the obtained gray world white balance coefficient, obtains the white balance coefficient finally used for image correction, and outputs it to the white balance correction module; the specific steps are as follows: setting two white balance threshold parameters limit1 and limit2, first calculate the difference absgainR between GEgainR and GSgainR, i.e. absgainR = |GEgainR − GSgainR|; if absgainR is not greater than limit1, the gray world white balance coefficient is close to the gray edge white balance coefficient, and the accurate white balance solution obtained by the gray world algorithm is adopted; if absgainR is greater than limit2, the gray world white balance coefficient is far from the gray edge white balance coefficient, and the white balance solution obtained by the gray edge algorithm is used; otherwise, in the critical region, the weighted average of the two is taken as the white balance solution, using the following formula:

gainR_new = [(limit2 − absgainR) · GSgainR + (absgainR − limit1) · GEgainR] / (limit2 − limit1)

wherein gainR_new and gainB_new are the r- and b-channel white balance coefficients finally used for correction (gainB_new is obtained analogously with absgainB = |GEgainB − GSgainB|), and the white balance coefficient of the g channel is always set to 1;
the white balance correction module performs white balance correction processing on the Bayer image again according to a white balance coefficient finally used for image correction and outputs the Bayer image to the demosaicing module; the output demosaicing module outputs the image to a color image processing unit; the color image processing unit processes the image and outputs the image to a compression/display/storage device.
7. The image processing apparatus for correcting white balance based on a gray-edge-constrained gray world according to claim 6, wherein: a gray edge algorithm performed by the gray edge statistical module and the white balance coefficient calculation module adopts a gray edge algorithm based on image block gradient or a gray edge algorithm based on image transverse mean smooth down-sampling and transverse first-order difference to obtain a Bayer image illumination estimation value e, and then gray edge white balance coefficients GEgainR and GEgainB are obtained according to the Bayer image illumination estimation value e; the method comprises the following specific steps:
gray edge algorithm based on image block gradients: firstly, the image is uniformly divided into Bw × Bh blocks, each of size s × s, and all pixels inside each block are averaged to obtain the pixel value

f_c^s(m′, n′) = (1/s²) · Σ_{m=(m′−1)s+1}^{m′·s} Σ_{n=(n′−1)s+1}^{n′·s} f_c(m, n)

thus obtaining a small image of size Bw × Bh; then the second-order gradient of the image is calculated using the discrete Laplacian operator in formula (1), and the average gradient of each channel is computed to obtain the Bayer image illumination estimate e:

∫ (Lap ⊗ f_c^s(x)) dx = k e_c,  c ∈ {r, g, b},    (1)

wherein f_c^s represents the small image of the c-channel after the s × s block averaging operation, and Lap is the discrete Laplacian:

Lap = [ 1   1   1
        1  −8   1
        1   1   1 ];
gray edge algorithm based on image transverse mean smooth down-sampling and transverse first-order difference: the N-order down-sampling filtering template is set as a 1 × N row of ones:

T_N = [1 1 … 1]

and the image after the transverse mean smooth down-sampling is

f^N(m′, n) = Σ_{m = N(m′−1)}^{N·m′ − 1} f(m, n)

wherein N ∈ (1, width(f(m, n))), i.e. the down-sampling template is larger than 1 and smaller than the image width;
the gradient solving mode of transverse first-order difference is adopted, and the difference template is as follows:
T=[1 -1]
the difference image is the convolution of the down-sampled image with the difference template:

f_T^N(x) = | f^N ⊗ T |
For the Minkowski norm parameter, the value of p is taken to be 1, i.e. no Minkowski norm is introduced;
an illumination estimate for the scene can be derived as:

∫ f_T^N(x) dx = k e^N

wherein f_T^N(x) = | f^N ⊗ T |,

thereby obtaining the Bayer image illumination estimate e;
then, according to the illumination estimate e = [e_r, e_g, e_b]^T, the Bayer image is corrected to the standard illumination e′ = [e′_r, e′_g, e′_b]^T = [1/3, 1/3, 1/3]^T, giving the output image f′(x):

f′_c(x) = (e′_c / e_c) f_c(x),  c ∈ {r, g, b}
then the gray edge white balance coefficient GEgainR is equal to e_g / e_r; for the same reason, GEgainB is equal to e_g / e_b; i.e. the gray edge constraint is obtained.
8. The image processing apparatus for correcting a white balance based on a gray edge-constrained gray world according to claim 6 or 7, characterized in that: the grey world algorithm performed by the grey world statistic module and the white balance coefficient calculation module adopts the following method:
firstly, the number of white points in each frame of the demosaiced image and the accumulated values Σf_r(x), Σf_g(x), Σf_b(x) of the r, g, b channels over the white points in each frame are calculated; a pixel is considered a white point when the following three conditions are simultaneously satisfied:

Condition 1: Gsmin ≤ f_g(x) ≤ Gsmax
Condition 2: |f_r(x) − f_g(x)| < GSration · f_g(x)
Condition 3: |f_b(x) − f_g(x)| < GSration · f_g(x)

wherein condition 1 indicates that a pixel is counted only when its g-channel value lies between Gsmin and Gsmax, removing the influence of extremely dark and extremely bright points; conditions 2 and 3 indicate that a pixel is regarded as a white point only if the absolute difference between its r (respectively b) and g channel values is less than the product of GSration and f_g(x); only when the three conditions are met simultaneously is the pixel used for the gray world statistics;
when the number of white points in a frame exceeds a set threshold, the statistics of that frame can be used to compute a gray-world white balance coefficient; the results of multiple frames are averaged to obtain the final output gray-world white balance coefficient; the gray-world white balance coefficients GSgainR and GSgainB are computed from these accumulated statistics and from gainR and gainB, the white balance coefficients currently applied to the r and b channels.
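The white-point selection and gain averaging of claim 8 can be sketched as follows. The threshold values for Gsmin, Gsmax and GSratio are illustrative, and the per-frame formula GSgainR = gainR · Σf_g / Σf_r is an assumed reconstruction in the standard gray-world form, since the patent renders its gain formula as an image:

```python
import numpy as np

def gray_world_stats(frame, gs_min=16.0, gs_max=240.0, gs_ratio=0.15):
    """Select white points per the three conditions of claim 8 and
    accumulate their per-channel sums.

    frame: H x W x 3 array with channels (r, g, b).
    Returns (white-point count, sum_r, sum_g, sum_b).
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    mask = ((g > gs_min) & (g < gs_max)            # condition 1: reject extremes
            & (np.abs(r - g) < gs_ratio * g)       # condition 2: r close to g
            & (np.abs(b - g) < gs_ratio * g))      # condition 3: b close to g
    return int(mask.sum()), r[mask].sum(), g[mask].sum(), b[mask].sum()

def gray_world_gains(frames, gain_r, gain_b, min_points=100, **kw):
    """Average per-frame gray-world gains over frames whose white-point
    count exceeds the threshold; keep the current gains otherwise."""
    gains_r, gains_b = [], []
    for f in frames:
        n, sr, sg, sb = gray_world_stats(f, **kw)
        if n > min_points and sr > 0 and sb > 0:
            gains_r.append(gain_r * sg / sr)       # assumed gray-world form
            gains_b.append(gain_b * sg / sb)
    if not gains_r:
        return gain_r, gain_b
    return sum(gains_r) / len(gains_r), sum(gains_b) / len(gains_b)
```

On a near-gray frame (r slightly below g, b slightly above), every pixel passes the three conditions and the returned gains pull r up and b down toward g.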
9. The image processing apparatus for correcting a white balance based on a gray edge-constrained gray world according to claim 6 or 7, characterized in that: the demosaicing module converts the Bayer image into an RGB image through a color image interpolation algorithm and outputs the demosaiced image.
10. The image processing apparatus for correcting a white balance based on a gray edge-constrained gray world according to claim 8, characterized in that: the demosaicing module converts the Bayer image into an RGB image through a color image interpolation algorithm and outputs the demosaiced image.
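Claims 9 and 10 state only that the demosaicing module uses "a color image interpolation algorithm"; bilinear interpolation over an RGGB Bayer pattern is one common choice and is assumed in this sketch:

```python
import numpy as np

def demosaic_bilinear(bayer):
    """Bilinear demosaic of an RGGB Bayer mosaic (even H and W assumed).

    Each missing sample is the average of the known same-channel
    samples in its 3x3 neighbourhood.
    """
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3))
    known = np.zeros((h, w, 3), dtype=bool)
    # Scatter the known samples into their channel planes (RGGB layout).
    rgb[0::2, 0::2, 0] = bayer[0::2, 0::2]; known[0::2, 0::2, 0] = True  # R
    rgb[0::2, 1::2, 1] = bayer[0::2, 1::2]; known[0::2, 1::2, 1] = True  # G
    rgb[1::2, 0::2, 1] = bayer[1::2, 0::2]; known[1::2, 0::2, 1] = True  # G
    rgb[1::2, 1::2, 2] = bayer[1::2, 1::2]; known[1::2, 1::2, 2] = True  # B
    for c in range(3):
        plane = rgb[:, :, c]
        k = known[:, :, c].astype(float)
        pp, pk = np.pad(plane, 1), np.pad(k, 1)    # zero-pad the borders
        acc = np.zeros((h, w))
        cnt = np.zeros((h, w))
        for dy in (-1, 0, 1):                      # sum known 3x3 neighbours
            for dx in (-1, 0, 1):
                acc += pp[1+dy:h+1+dy, 1+dx:w+1+dx] * pk[1+dy:h+1+dy, 1+dx:w+1+dx]
                cnt += pk[1+dy:h+1+dy, 1+dx:w+1+dx]
        rgb[:, :, c] = np.where(known[:, :, c], plane, acc / np.maximum(cnt, 1))
    return rgb
```

A flat mosaic is a quick sanity check: every interpolated sample equals its neighbours, so the output is a constant RGB image.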
CN201310205857.2A 2013-05-29 2013-05-29 White balance corrected image processing method and device based on gray edge constraint gray world Active CN103313068B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310205857.2A CN103313068B (en) 2013-05-29 2013-05-29 White balance corrected image processing method and device based on gray edge constraint gray world


Publications (2)

Publication Number Publication Date
CN103313068A true CN103313068A (en) 2013-09-18
CN103313068B CN103313068B (en) 2017-02-08

Family

ID=49137785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310205857.2A Active CN103313068B (en) 2013-05-29 2013-05-29 White balance corrected image processing method and device based on gray edge constraint gray world

Country Status (1)

Country Link
CN (1) CN103313068B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002305755A (en) * 2001-04-06 2002-10-18 Canon Inc Image processing unit, control method for the image processing unit, control program, and recording medium for providing the control program
CN1994000A * 2004-06-25 2007-07-04 Qualcomm Inc. Automatic white balance method and apparatus
CN101175143A (en) * 2006-11-03 2008-05-07 普立尔科技股份有限公司 Digital picture capturing device and white balance adjustment method thereof
CN102883168A (en) * 2012-07-05 2013-01-16 上海大学 White balance processing method directed towards atypical-feature image


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107545550A (en) * 2017-08-25 2018-01-05 安庆师范大学 Cell image color cast correction
CN107545550B (en) * 2017-08-25 2020-04-10 安庆师范大学 Cell image color cast correction method
CN108184111A (en) * 2017-12-29 2018-06-19 上海安翰医疗技术有限公司 White balance correcting, endoscope and storage medium based on FPGA registers
CN108184111B (en) * 2017-12-29 2021-04-02 上海安翰医疗技术有限公司 White balance correction method based on FPGA register, endoscope and storage medium
CN109618145A (en) * 2018-12-13 2019-04-12 深圳美图创新科技有限公司 Color constancy bearing calibration, device and image processing equipment
CN109618145B (en) * 2018-12-13 2020-11-10 深圳美图创新科技有限公司 Color constancy correction method and device and image processing equipment
CN110022469A (en) * 2019-04-09 2019-07-16 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110022469B (en) * 2019-04-09 2021-03-02 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN112243118A (en) * 2019-07-18 2021-01-19 浙江宇视科技有限公司 White balance correction method, device, equipment and storage medium
CN114697483A (en) * 2020-12-31 2022-07-01 复旦大学 Device and method for shooting under screen based on compressed sensing white balance algorithm
CN114697483B (en) * 2020-12-31 2023-10-10 复旦大学 Under-screen camera shooting device and method based on compressed sensing white balance algorithm

Also Published As

Publication number Publication date
CN103313068B (en) 2017-02-08

Similar Documents

Publication Publication Date Title
CN103313068B (en) White balance corrected image processing method and device based on gray edge constraint gray world
US8948506B2 (en) Image processing device, image processing method, and program
KR101633946B1 (en) Image processing device, image processing method, and recording medium
EP2278788B1 (en) Method and apparatus for correcting lens shading
CN102663719B (en) Bayer-pattern CFA image demosaicking method based on non-local mean
US9253459B2 (en) Image processing apparatus and image processing method, and program
Chung et al. Demosaicing of color filter array captured images using gradient edge detection masks and adaptive heterogeneity-projection
CN103595980B (en) Based on the color filter array image demosaicing method of outline non-local mean value
Fang et al. Single image dehazing and denoising: a fast variational approach
US9870600B2 (en) Raw sensor image and video de-hazing and atmospheric light analysis methods and systems
CN103595981B (en) Based on the color filter array image demosaicing method of non-local low rank
CN105809630B (en) A kind of picture noise filter method and system
CN103347190B (en) Edge-related and color-combined demosaicing and amplifying method
CN104620282A (en) Methods and systems for suppressing noise in images
EP3891693A1 (en) Image processor
CN110378848B (en) Image defogging method based on derivative map fusion strategy
CN113822830B (en) Multi-exposure image fusion method based on depth perception enhancement
CN111539893A (en) Bayer image joint demosaicing denoising method based on guided filtering
CN112529854A (en) Noise estimation method, device, storage medium and equipment
US20080285868A1 (en) Simple Adaptive Wavelet Thresholding
Zhang et al. Single image dehazing based on fast wavelet transform with weighted image fusion
CN112070683B (en) Underwater polarized image restoration method based on polarization and wavelength attenuation combined optimization
CN103685858A (en) Real-time video processing method and equipment
CN107945119B (en) Method for estimating correlated noise in image based on Bayer pattern
CN112308785A (en) Image denoising method, storage medium and terminal device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: HUNAN VISIONSPLEND OPTOELECTRONIC TECHNOLOGY CO.,

Free format text: FORMER OWNER: SHANXI GREEN ELECTRO-OPTIC INDUSTRY TECHNOLOGY INSTITUTE (CO., LTD.)

Effective date: 20140110

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 033300 LVLIANG, SHAANXI PROVINCE TO: 410073 CHANGSHA, HUNAN PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20140110

Address after: 410073 Hunan province Changsha Kaifu District, 31 Road No. 303 Building 5 floor A Di Shang Yong

Applicant after: HUNAN VISION SPLEND PHOTOELECTRIC TECHNOLOGY Co.,Ltd.

Address before: 033300 Shanxi city of Lvliang province Liulin County Li Jia Wan Xiang Ge duo Cun Bei River No. 1

Applicant before: SHANXI GREEN OPTOELECTRONIC INDUSTRY SCIENCE AND TECHNOLOGY RESEARCH INSTITUTE (CO., LTD.)

C14 Grant of patent or utility model
GR01 Patent grant