Background
Color is the basis of an image and carries its essential visual information. On one hand, the color information of an image is captured for human viewing; on the other hand, it is widely used as an important cue in computer vision research, such as feature extraction, object recognition, and image retrieval. However, the colors an object reflects differ under different illumination conditions; the purpose of white balance is to eliminate the influence of the illuminant and restore the true colors of the object under standard illumination.
Image illumination estimation is the first step of white balance computation, and often the most important and difficult one. Its result can frequently be used directly to correct the color cast of an image; for example, in a camera's white balance, the gains of the red, green, and blue channels are adjusted directly according to the color cast of the illuminant.
Existing illumination estimation includes two classical algorithms: the gray world method and the gray edge method. The gray world assumption holds that the average reflectance of all physical surfaces in a scene is achromatic (gray). Under this assumption, the statistical mean of each color channel of an image captured under white illumination is achromatic, i.e. the channel means are equal; any difference between the channel means must therefore be due to the ambient illuminant. The gray world method derived from this assumption is simple to compute, but its results are often unsatisfactory.
The gray edge hypothesis holds that the mean of the reflectance differences of all physical surfaces in a scene is achromatic (gray). Based on this assumption, the gray edge method first computes the mean magnitude of the first- or second-order gradients of each channel image, then estimates the illuminant from the differences between the channel means. During the computation the image is blurred with Gaussian kernels of different σ so that information at different scales is captured, and a Minkowski norm is introduced, yielding the gray edge method in its general form:

$$\left( \int \left| \frac{\partial^n f_c^{\sigma}(x)}{\partial x^n} \right|^p dx \right)^{1/p} = k\, e_c$$

where $f_c(x)$ is the c-channel of the color image $f(x)$, $x$ denotes the two-dimensional image coordinates, $f^{\sigma}$ denotes the image after Gaussian convolution at scale σ, $n = 0, 1, 2$ is the order of the image gradient, $p$ is the Minkowski norm, $e = (e_r, e_g, e_b)^T$ is the estimated illuminant, and $k$ is a normalization constant such that $\|e\| = 1$. This formula unifies the traditional gray world method, the max-RGB method, the Shades of Gray algorithm, and the gray edge method into a single framework.
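The unified framework above can be sketched in a few lines. The following is a minimal illustration (not the patent's implementation), assuming a NumPy float image and omitting the Gaussian pre-smoothing; the function name is illustrative:

```python
import numpy as np

def estimate_illuminant(img, n=0, p=1):
    """Unified gray world / gray edge framework (Gaussian pre-smoothing omitted).
    img: H x W x 3 float array.
    n=0, p=1  -> gray world; n=0, large p -> Shades of Gray (p -> inf: max-RGB);
    n=1       -> first-order gray edge."""
    e = np.zeros(3)
    for c in range(3):
        ch = img[..., c]
        if n == 0:
            d = np.abs(ch)
        elif n == 1:
            gy, gx = np.gradient(ch)
            d = np.sqrt(gx ** 2 + gy ** 2)   # first-order gradient magnitude
        else:
            raise ValueError("only n = 0 or 1 in this sketch")
        e[c] = np.mean(d ** p) ** (1.0 / p)
    return e / np.linalg.norm(e)             # k chosen so that ||e|| = 1

# under white light a gray-world scene yields equal normalized channel values
e = estimate_illuminant(np.full((8, 8, 3), 0.5))   # ~[0.577, 0.577, 0.577]
```

With n = 0 and p = 1 this reduces to the gray world mean; raising p shifts weight toward bright pixels, and n = 1 switches the statistic to edge strength.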
The gray edge approach has several limitations. First, although it can be implemented in only a few lines of code, the computation involves Gaussian convolution, which severely affects its speed: for the second-order gray edge method, experiments show good results for 4 < σ < 7, and even at σ = 4 the convolution kernel is 25 × 25; even when the convolution is decomposed into the x and y directions, two 1 × 25 convolutions are required, a computational cost roughly 50 times that of the gray world algorithm. Second, if the σ and p parameters are chosen poorly, good results are hard to obtain, especially when no prior information about the input image is available. Third, the image gradient computation is complex and expensive: the first-order gradient magnitude $|\nabla f| = \sqrt{f_x^2 + f_y^2}$ requires computing the first-order gradients in the x and y directions followed by squaring and square-root operations, and the second-order gradient is more complex still. In short, the gray edge method is simple in principle and much improved in accuracy, but its computation involves Gaussian convolution with high time complexity, and the choice of convolution kernel size lacks concrete guidance.
In summary, the gray world algorithm is accurate in most scenes, but it is very unstable in some (such as scenes dominated by a large monochromatic object); the gray edge algorithm is robust, but its accuracy is limited.
Disclosure of Invention
To solve the above problems of the existing gray world and gray edge algorithms, the invention provides an image processing method for white balance correction based on a gray-edge-constrained gray world.
The invention is realized by adopting the following technical scheme:
A white balance correction image processing method based on a gray-edge-constrained gray world comprises the following steps:
(I) after an image is collected by the image sensor, the optical signal is converted into an electrical signal, transmitted to a Bayer image processing unit in Bayer image format, and a Bayer image is output;
(II) the Bayer image output in step (I) is, on one hand, input to a gray edge statistics module and a white balance coefficient calculation module, where a gray edge algorithm is performed to obtain the gray edge white balance coefficients GEgainR and GEgainB;
(III) on the other hand, the Bayer image output in step (I) is white balance corrected by a white balance correction module according to the gray edge white balance coefficients obtained in step (II), and is then demosaiced by a demosaicing module;
(IV) the image output in step (III) passes sequentially through a gray world statistics module and a white balance coefficient calculation module, where a gray world algorithm is performed to obtain the gray world white balance coefficients GSgainR and GSgainB;
(V) the gray world white balance coefficients obtained in step (IV) are constrained using the gray edge white balance coefficients obtained in step (II), and the white balance coefficients finally used for image correction are obtained through the white balance coefficient calculation module. The specific steps are as follows:
Two white balance threshold parameters, limit1 and limit2, are set (their values can be determined by tuning by a person skilled in the art). First the difference between GEgainR and GSgainR is computed: absgainR = |GEgainR − GSgainR|. If absgainR is not greater than limit1, the gray world white balance coefficient is close to the gray edge white balance coefficient, and the accurate white balance solution obtained by the gray world algorithm is adopted; if absgainR is greater than limit2, the gray world white balance coefficient is far from the gray edge white balance coefficient, and the white balance solution obtained by the gray edge algorithm is adopted; otherwise the coefficient lies in the critical zone, and a weighted average of the two is taken as the white balance solution according to the following formula:

$$w = \frac{limit2 - absgainR}{limit2 - limit1}, \qquad gainR\_new = w \cdot GSgainR + (1 - w) \cdot GEgainR$$

(and likewise for gainB_new), where gainR_new and gainB_new are the r and b channel white balance coefficients finally used for correction; the white balance coefficient of the g channel is always set to 1;
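The selection logic above can be sketched as follows. The limit1/limit2 defaults and the linear blend weight in the critical zone are assumptions for illustration (the patent leaves the exact values and weighting to tuning):

```python
def combine_gain(ge_gain, gs_gain, limit1=0.1, limit2=0.3):
    """Constrain a gray-world gain by the corresponding gray-edge gain.
    limit1/limit2 defaults and the linear blend weight are assumed values."""
    d = abs(ge_gain - gs_gain)
    if d <= limit1:                  # close: trust the accurate gray-world solution
        return gs_gain
    if d > limit2:                   # far: fall back to the robust gray-edge solution
        return ge_gain
    w = (limit2 - d) / (limit2 - limit1)   # critical zone: blend the two
    return w * gs_gain + (1 - w) * ge_gain

# applied independently to the r and b channels; the g gain stays 1
gainR_new = combine_gain(ge_gain=1.80, gs_gain=1.85)   # difference 0.05 -> gray world
gainB_new = combine_gain(ge_gain=1.20, gs_gain=1.60)   # difference 0.40 -> gray edge
```

The blend makes the final gain a continuous function of absgainR, so small measurement changes near limit1 or limit2 do not cause a jump in the correction.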
(VI), according to the white balance coefficient obtained in the step (V), returning to the step (III) to sequentially perform white balance correction processing and demosaicing processing on the Bayer image again; then, after color image processing, the image enters a device such as a compression/display/storage device, and the image processing is completed.
The core of the method is that the gray edge constraint is first obtained by solving with the gray edge algorithm, which limits the solution space to the range allowed by that constraint and guarantees the basic robustness of the algorithm; an accurate solution is then found within this limited solution space using the gray world algorithm. Both the gray edge algorithm and the gray world algorithm can be carried out using conventional prior-art algorithms.
Preferably, the gray edge algorithm in step (II) adopts one of two more practical algorithms, a gray edge algorithm based on image block gradients or a gray edge algorithm based on horizontal mean-smoothing downsampling and horizontal first-order differences, to obtain the Bayer image illuminant estimate e, from which the gray edge white balance coefficients GEgainR and GEgainB are then obtained. The specific steps are as follows:
Gray edge algorithm based on image block gradients: first the image is uniformly divided into Bw × Bh blocks, each of size s × s, and all pixels within each block are averaged to obtain the pixel value

$$\bar f_c(i, j) = \frac{1}{s^2} \sum_{x \in \mathrm{block}(i, j)} f_c(x)$$

yielding a small image of size Bw × Bh. Then the second-order gradient of the image is computed with the discrete Laplacian in formula (1) below, and the mean gradient magnitude of each channel gives the Bayer image illuminant estimate e:

$$k\, e_c = \frac{1}{B_w B_h} \sum_{i, j} \left| (Lap * \bar f_c)(i, j) \right| \quad (1)$$

where $\bar f_c$ denotes the small image of the c-channel after the s × s block averaging operation, and Lap is the discrete Laplacian:

$$Lap = \begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$
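A minimal sketch of the block-gradient estimate, assuming a NumPy image whose dimensions are divisible by the block size s (the helper name is illustrative):

```python
import numpy as np

def block_gradient_illuminant(img, s=4):
    """Block-gradient gray edge estimate (sketch). img: H x W x 3 float,
    with H and W divisible by the block size s."""
    H, W, _ = img.shape
    # s x s block averaging -> small Bw x Bh image (also smooths/denoises)
    small = img.reshape(H // s, s, W // s, s, 3).mean(axis=(1, 3))
    lap = np.array([[0,  1, 0],
                    [1, -4, 1],
                    [0,  1, 0]], dtype=float)        # discrete Laplacian
    h, w = small.shape[:2]
    e = np.zeros(3)
    for c in range(3):
        ch = small[..., c]
        # 'valid' correlation with the (symmetric) 3x3 Laplacian
        g = sum(lap[i, j] * ch[i:h - 2 + i, j:w - 2 + j]
                for i in range(3) for j in range(3))
        e[c] = np.abs(g).mean()                      # mean gradient magnitude
    return e
```

A channel that is flat after block averaging contributes zero, so only channels with residual block-to-block variation influence the estimate.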
Gray edge algorithm based on horizontal mean-smoothing downsampling and horizontal first-order differences: an N-th order downsampling filter template is set as

$$T_N = [\, \underbrace{1 \ 1 \ \cdots \ 1}_{N} \,]$$

and the image after horizontal mean-smoothing downsampling is

$$f_s(m, n) = \sum_{i=0}^{N-1} f(m, nN + i)$$

where $N \in [1, \mathrm{width}(f(m, n))]$, i.e. the downsampling template is larger than 1 and smaller than the image width;
The gradient is obtained via a horizontal first-order difference, with the difference template

T = [1 −1]

The difference image is the convolution of the downsampled image with the difference template:

$$d(m, n) = f_s(m, n) - f_s(m, n + 1)$$
For the Minkowski norm, the value p = 1 is taken, i.e. no Minkowski norm is introduced;
The illuminant estimate of the scene is then obtained as

$$k\, e_c = \sum_{m, n} \left| d_c(m, n) \right|$$

where $d_c$ is the difference image of the c-channel, thereby giving the Bayer image illuminant estimate e;
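The downsample-and-difference estimate can be sketched as follows. The non-overlapping window sum and the gain ratios e_g/e_r, e_g/e_b are assumptions for illustration; the divisions by N and by the pixel count are omitted, which only rescales e:

```python
import numpy as np

def horiz_gray_edge_illuminant(img, N=4):
    """Sketch: horizontal mean-smoothing downsampling (division by N omitted)
    followed by the horizontal first-order difference T = [1 -1], with p = 1
    and the final division by the pixel count also omitted."""
    H, W, _ = img.shape
    W2 = (W // N) * N
    # sum over non-overlapping horizontal windows of length N (assumed
    # non-overlapping downsampling; no image lines need buffering)
    down = img[:, :W2].reshape(H, W2 // N, N, 3).sum(axis=2)
    diff = np.abs(down[:, 1:] - down[:, :-1])   # convolution with [1 -1]
    return diff.sum(axis=(0, 1))                # unnormalized e = (e_r, e_g, e_b)

# the gray-edge gains then follow as ratios (the g gain is fixed at 1):
img = np.random.default_rng(0).random((8, 16, 3))
e = horiz_gray_edge_illuminant(img)
GEgainR, GEgainB = e[1] / e[0], e[1] / e[2]
```

Because only horizontal windows and differences are used, the whole computation is streamable row by row, matching the line-by-line transfer of sensor data.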
Then, according to the illuminant estimate $e = [e_r, e_g, e_b]^T$, the Bayer image is corrected to the output image f′(x) under standard illumination:

$$f'_r(x) = \frac{e_g}{e_r} f_r(x), \qquad f'_g(x) = f_g(x), \qquad f'_b(x) = \frac{e_g}{e_b} f_b(x)$$

The gray edge white balance coefficient GEgainR is then equal to $e_g / e_r$ and, likewise, GEgainB is equal to $e_g / e_b$; i.e. the gray edge constraint is obtained.
Preferably, the gray world algorithm in step (iv) employs the following method:
First, the number of white points in each frame of the image output by step (III) and the accumulated r, g, b values of those white points, $\sum f_r(x)$, $\sum f_g(x)$, $\sum f_b(x)$, are computed. A pixel is considered a white point when the following three conditions hold simultaneously:

Condition 1: $Gsmin < f_g(x) < Gsmax$
Condition 2: $|f_r(x) - f_g(x)| < GSration \cdot f_g(x)$
Condition 3: $|f_b(x) - f_g(x)| < GSration \cdot f_g(x)$

Condition 1 means a pixel is counted only when its g-channel value lies between Gsmin and Gsmax, removing the influence of extreme darkness and extreme brightness; conditions 2 and 3 mean a pixel is considered a white point only when the absolute differences between its r and b channel values and its g channel value are each less than the product of GSration and $f_g(x)$. Only pixels satisfying all three conditions are used for the gray world statistics. Gsmin, Gsmax, and GSration are determined by tuning by a person skilled in the art;
When the number of white points in a frame exceeds a set threshold, the statistics of that frame can be used to compute the gray world white balance coefficients; the results of multiple frames are averaged to obtain the final output. The gray world white balance coefficients are computed as

$$GSgainR = gainR \cdot \frac{\sum f_g(x)}{\sum f_r(x)}, \qquad GSgainB = gainB \cdot \frac{\sum f_g(x)}{\sum f_b(x)}$$

where gainR and gainB are the white balance coefficients currently applied to the r and b channels, thereby giving the gray world white balance coefficients GSgainR and GSgainB.
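A sketch of the white-point statistics and coefficient update, assuming an RGB NumPy array. The threshold defaults and the single-frame `min_white` guard are illustrative stand-ins for the tuned values and the per-frame white-point count threshold:

```python
import numpy as np

def gray_world_gains(img, gainR=1.0, gainB=1.0,
                     Gsmin=10, Gsmax=250, GSration=0.1, min_white=1):
    """White-point gray world step (sketch). img: H x W x 3 RGB.
    Threshold defaults and min_white are assumed illustrative values."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    mask = ((g > Gsmin) & (g < Gsmax)          # condition 1: drop extremes
            & (np.abs(r - g) < GSration * g)   # condition 2: r near g
            & (np.abs(b - g) < GSration * g))  # condition 3: b near g
    if mask.sum() < min_white:                 # too few white points: keep gains
        return gainR, gainB
    sr, sg, sb = r[mask].sum(), g[mask].sum(), b[mask].sum()
    # scale the currently applied gains so the white-point channel sums equalize
    return gainR * sg / sr, gainB * sg / sb
```

Because the statistics are taken on the already gray-edge-corrected image, the currently applied gains are multiplied by the correction ratio rather than replaced.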
Based on the above process, as shown in fig. 2:
(1) Solving with the gray edge algorithm to obtain the gray edge constraint: the image is first corrected with a gray edge algorithm to obtain the gray edge white balance coefficients. The invention provides two more practical gray edge algorithms. The first is based on image block gradients: the image is uniformly partitioned into blocks, and all pixels within each block are averaged to a single pixel value, giving a small image; the second-order gradient of this small image is then computed with a discrete Laplacian, and the mean gradient of each channel yields the illuminant estimate. The second is a simplified algorithm based on horizontal mean-smoothing downsampling and horizontal first-order differences: Gaussian smoothing is achieved by a horizontal mean-smoothing downsampling filter, and the higher-order derivative is obtained via a horizontal first-order difference, again yielding the illuminant estimate. Once the illuminant estimate is obtained, the image can be white balance corrected, and the gray edge constraint, i.e. the gray edge white balance coefficients, is obtained at the same time.
(2) Solving for an accurate solution within the range of the gray edge constraint using the gray world algorithm: the gray world white balance coefficients are computed with the gray world method from the image after gray edge white balance correction, and the white balance coefficients finally used for image correction are obtained by combining the gray edge and gray world coefficients. If the gray world coefficient is close to the gray edge coefficient, the gray world coefficient is adopted; if it is far away, the gray edge coefficient is adopted; otherwise, in the critical zone, a weighted average of the two is adopted.
The method has the following advantages:
(1) The computational cost is small. The gray edge statistics module and the gray world statistics module must scan the whole image, but only simple tests and accumulations are performed, so resource consumption is low; the white balance coefficient calculation module performs logically more complex calculations, but on a small amount of data, so its resource consumption is also low.
(2) The accuracy of white balance correction is high.
Further, a white balance correction image processing apparatus based on a gray-edge-constrained gray world comprises:
an image sensor that outputs an image to a Bayer image processing unit in Bayer image format; the Bayer image processing unit outputs a Bayer image;
on one hand, the output Bayer image passes sequentially through a gray edge statistics module and a white balance coefficient calculation module, where the gray edge algorithm is performed to obtain the gray edge white balance coefficients GEgainR and GEgainB, which are output to a white balance correction module;
on the other hand, the output Bayer image is subjected to white balance correction processing through the white balance correction module according to the obtained gray edge white balance coefficient and is output to the demosaicing module; the demosaicing module carries out demosaicing processing and outputs a demosaiced image;
on one hand, the output demosaiced image is subjected to gray world algorithm sequentially through a gray world statistical module and a white balance coefficient calculation module to obtain gray world white balance coefficients GSgainR and GSgainB, and the gray world white balance coefficients GSgainR and GSgainB are output to the white balance coefficient calculation module;
the white balance coefficient calculation module uses the obtained gray edge white balance coefficients to constrain the obtained gray world white balance coefficients, obtains the white balance coefficients finally used for image correction, and outputs them to the white balance correction module. The specific steps are as follows: two white balance threshold parameters, limit1 and limit2, are set (their values can be determined by tuning by a person skilled in the art). First the difference absgainR = |GEgainR − GSgainR| is computed. If absgainR is not greater than limit1, the gray world white balance coefficient is close to the gray edge white balance coefficient, and the accurate white balance solution obtained by the gray world algorithm is adopted; if absgainR is greater than limit2, the gray world white balance coefficient is far from the gray edge white balance coefficient, and the white balance solution obtained by the gray edge algorithm is adopted; otherwise the coefficient lies in the critical zone, and a weighted average of the two is taken as the white balance solution according to the following formula:

$$w = \frac{limit2 - absgainR}{limit2 - limit1}, \qquad gainR\_new = w \cdot GSgainR + (1 - w) \cdot GEgainR$$

(and likewise for gainB_new), where gainR_new and gainB_new are the r and b channel white balance coefficients finally used for correction; the white balance coefficient of the g channel is always set to 1;
the white balance correction module performs white balance correction on the Bayer image again according to the white balance coefficients finally used for image correction and outputs it to the demosaicing module; the demosaicing module outputs the image to a color image processing unit; the color image processing unit processes the image and outputs it to a compression/display/storage device.
Preferably, the gray edge algorithm performed by the gray edge statistics module and the white balance coefficient calculation module adopts a gray edge algorithm based on image block gradients or a gray edge algorithm based on horizontal mean-smoothing downsampling and horizontal first-order differences to obtain the Bayer image illuminant estimate e, from which the gray edge white balance coefficients GEgainR and GEgainB are obtained, giving the gray edge constraint.
Preferably, the gray world algorithm performed by the gray world statistics module and the white balance coefficient calculation module adopts the following method:
First, the number of white points in each frame of the demosaiced image and the accumulated r, g, b values of those white points, $\sum f_r(x)$, $\sum f_g(x)$, $\sum f_b(x)$, are computed. A pixel is considered a white point when the following three conditions hold simultaneously:

Condition 1: $Gsmin < f_g(x) < Gsmax$
Condition 2: $|f_r(x) - f_g(x)| < GSration \cdot f_g(x)$
Condition 3: $|f_b(x) - f_g(x)| < GSration \cdot f_g(x)$

Condition 1 means a pixel is counted only when its g-channel value lies between Gsmin and Gsmax, removing the influence of extreme darkness and extreme brightness; conditions 2 and 3 mean a pixel is considered a white point only when the absolute differences between its r and b channel values and its g channel value are each less than the product of GSration and $f_g(x)$. Only pixels satisfying all three conditions are used for the gray world statistics. Gsmin, Gsmax, and GSration are determined by tuning by a person skilled in the art;
When the number of white points in a frame exceeds a set threshold, the statistics of that frame can be used to compute the gray world white balance coefficients; the results of multiple frames are averaged to obtain the final output. The gray world white balance coefficients are computed as

$$GSgainR = gainR \cdot \frac{\sum f_g(x)}{\sum f_r(x)}, \qquad GSgainB = gainB \cdot \frac{\sum f_g(x)}{\sum f_b(x)}$$

where gainR and gainB are the white balance coefficients currently applied to the r and b channels, thereby giving the gray world white balance coefficients GSgainR and GSgainB.
During operation, as shown in fig. 1, after the image is captured by the image sensor, the optical signal is converted into an electrical signal and transmitted to the Bayer image processing unit in Bayer image format, which outputs a Bayer image, carrying out step (I) of the above method.
One path of the output Bayer image passes through the gray edge statistics module and the white balance coefficient calculation module, where the gray edge algorithm yields the gray edge white balance coefficients (realizing step (II) of the method); these are output both to the white balance correction module and to the white balance coefficient calculation module. The other path of the output Bayer image passes through the white balance correction module, where gray edge white balance correction is performed according to the input gray edge white balance coefficients; the image is then output to the demosaicing module, which converts the Bayer image to RGB through a color image interpolation algorithm, carrying out step (III) of the method.
Obtaining a gray world white balance coefficient (realizing the step (IV) of the method) by passing one path of an output image of the demosaicing module through a gray world statistical module and a white balance coefficient calculation module through a gray world algorithm, and outputting the gray world white balance coefficient to the white balance coefficient calculation module; the other path of the output image of the demosaicing module is sequentially output to the color image processing unit and the compression/display/storage device.
The white balance coefficient calculation module firstly uses the gray edge white balance coefficient input by the gray edge statistical module as gray edge constraint, limits the space of the solution of the gray world white balance coefficient input by the gray world statistical module within the range constrained by the gray edge, and then calculates an accurate solution in the limited solution space to obtain the white balance coefficient finally used for image correction; outputting the white balance coefficient finally used for image correction to a white balance correction module; step (v) of carrying out the above method.
Finally, the Bayer image output in step (I) passes through the white balance correction module, the demosaicing module, the color image processing unit, and the compression/display/storage device according to the white balance coefficients finally used for image correction, realizing step (VI) of the method.
The method is reasonably designed and addresses the problems that the existing gray world algorithm, while accurate in most scenes, is unstable in some (such as scenes dominated by a large monochromatic object), and that the gray edge algorithm, while robust, has limited accuracy.
Detailed Description
The following detailed description of specific embodiments of the invention refers to the accompanying drawings.
Example 1
A white balance correction image processing method based on a gray-edge-constrained gray world comprises the following steps:
(I) After the image is captured by the image sensor, the optical signal is converted into an electrical signal and transmitted to the Bayer image processing unit in Bayer image format; this unit mainly performs processing such as black level correction, defective pixel correction, and denoising, and outputs a Bayer image.
(II) The Bayer image output in step (I) is, on one hand, input to the gray edge statistics module and the white balance coefficient calculation module, where the gray edge algorithm is performed to obtain the gray edge white balance coefficients GEgainR and GEgainB;
First, the Bayer image illuminant estimate e is obtained through a gray edge algorithm based on image block gradients, or a gray edge algorithm based on horizontal mean-smoothing downsampling and horizontal first-order differences. The specific steps are as follows:
gray edge algorithm based on image block gradient:
First the image is uniformly divided into Bw × Bh blocks, each of size s × s, and all pixels within each block are averaged to obtain the pixel value

$$\bar f_c(i, j) = \frac{1}{s^2} \sum_{x \in \mathrm{block}(i, j)} f_c(x)$$

yielding a small image of size Bw × Bh. Then the second-order gradient of the image is computed with the Discrete Laplacian operator in formula (1) below, and the mean gradient magnitude of each channel gives the Bayer image illuminant estimate e:

$$k\, e_c = \frac{1}{B_w B_h} \sum_{i, j} \left| (Lap * \bar f_c)(i, j) \right| \quad (1)$$

where $\bar f_c$ denotes the small image of the c-channel after the s × s block averaging operation, and Lap is the Discrete Laplacian operator:

$$Lap = \begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$
The gradient computation is based on the block-averaged image, which hides an assumption: the mean of the differences between the average reflectances of neighboring blocks of all physical surfaces in the scene is achromatic (gray). The block averaging operation also has several intuitive benefits. First, block averaging smooths and denoises the image, which has been shown to be an important preprocessing step for improving the robustness of white balance algorithms; for example, the general gray world and gray edge methods both use Gaussian convolution to smooth the image. Second, after block averaging the image size is only 1/(s × s) of the original, reducing the computation of subsequent image processing.
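The 1/(s × s) size reduction can be checked directly; a toy NumPy demonstration (assuming s divides both image dimensions, with an illustrative 16 × 16 image):

```python
import numpy as np

# toy check of the 1/(s*s) size reduction from s x s block averaging
img = np.arange(16 * 16 * 3, dtype=float).reshape(16, 16, 3)   # toy 16x16 RGB image
s = 4
# reshape so axes 1 and 3 index the pixels inside each s x s block,
# then average over them
small = img.reshape(16 // s, s, 16 // s, s, 3).mean(axis=(1, 3))
print(small.shape)                      # (4, 4, 3)
print(img.size // small.size)           # 16 = s * s fewer values
```

Every later operation (the Laplacian, the channel means) therefore touches s × s times fewer pixels than the original image.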
Gray edge algorithm based on image transverse mean smooth down-sampling and transverse first-order difference:
According to the nature of the gray edge assumption, this algorithm targets two important steps in the gray edge algorithm, Gaussian smoothing and high-order derivative computation, and proposes a simplified scheme based on horizontal mean-smoothing downsampling and horizontal first-order differences. Gaussian smoothing is essentially high-frequency filtering and can be replaced by other high-frequency filters; the simplest is mean filtering. Considering that image sensors transmit image data line by line, combined horizontal and vertical mean filtering would require buffering multiple image lines, so to meet real-time requirements only horizontal mean filtering is adopted. Since no vertical filtering is applied, the filtering effect is reduced; to further suppress high-frequency information, and noting that image downsampling is also a high-frequency removal method, a horizontal mean-smoothing downsampling filter is adopted.
In general, the image after mean smoothing would be divided by the smoothing template size N to normalize the pixel values to the range 0 to 255 (for an 8-bit pixel depth). However, the white balance algorithm only needs the final illuminant color estimate, not a fully normalized smoothed image, so this division can be omitted to simplify the algorithm.
Setting an N-order down-sampling filtering template as follows:
the image after the horizontal mean smooth down-sampling is
Wherein, N belongs to [1, width (f (m, N)) ], namely the down-sampling template is larger than 1 and smaller than the image width;
For the gradient, differences can be taken horizontally, vertically, or even diagonally. Again, to compute the image gradient without buffering image lines, a horizontal first-order difference is adopted, with the difference template

T = [1 −1]

The difference image is the convolution of the downsampled image with the difference template:

$$d(m, n) = f_s(m, n) - f_s(m, n + 1)$$
For the Minkowski norm p, to further reduce computational complexity by avoiding multiplication and squaring operations, the value p = 1 is taken, i.e. no Minkowski norm is introduced;
When estimating the illuminant color, the color mean of the difference image would normally be computed, which requires dividing all three RGB channels by the same image pixel count ∫dx. In fact, if this division is not performed, the components α, β, γ of the illuminant color (writing e = (α, β, γ)) are only scaled up linearly by the same factor; the ratios between them do not change. To simplify the algorithm, the division in the illumination estimation is therefore also omitted. Finally, the illuminant estimate of the scene is obtained as

$$k\, e_c = \sum_{m, n} \left| d_c(m, n) \right|$$

where $d_c$ is the difference image of the c-channel, thereby giving the Bayer image illuminant estimate e.
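The claim that the omitted divisions leave the white balance gains unchanged is easy to verify numerically (the values below are hypothetical):

```python
import numpy as np

# omitting the divisions only rescales e; the channel ratios (the gains) survive
e = np.array([120.0, 200.0, 160.0])     # hypothetical unnormalized estimate (r, g, b)
e_norm = e / 5000.0                     # the division by the pixel count, if kept
# the white balance gains depend only on channel ratios:
assert np.isclose(e[1] / e[0], e_norm[1] / e_norm[0])   # g/r ratio unchanged
assert np.isclose(e[1] / e[2], e_norm[1] / e_norm[2])   # g/b ratio unchanged
print("gains:", e[1] / e[0], e[1] / e[2])
```

The same argument covers the division by the smoothing template size N: any common positive scale factor cancels in the gain ratios.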
Then, according to the illumination estimation value e = [ e ]r,eg,eb]TCorrection of Bayer images to standard illumination Output image f' (x):
then the gray edge white balance coefficient GEgainR is equal to
For the same reason, GegainB is equal to
I.e. a gray edge constraint is obtained.
(III) On the other hand, the Bayer image output in step (I) is white balance corrected by the white balance correction module according to the gray edge white balance coefficients obtained in step (II), then demosaiced by the demosaicing module, which converts the Bayer image to RGB through a color image interpolation algorithm.
(IV) The image output in step (III) passes sequentially through the gray world statistics module and the white balance coefficient calculation module, where the gray world algorithm is performed to obtain the gray world white balance coefficients GSgainR and GSgainB;
the method comprises the following specific steps:
First, the number of white points in each frame of the output image and the accumulated r, g, b values of those white points, $\sum f_r(x)$, $\sum f_g(x)$, $\sum f_b(x)$, are computed. A pixel is considered a white point when the following three conditions hold simultaneously:

Condition 1: $Gsmin < f_g(x) < Gsmax$
Condition 2: $|f_r(x) - f_g(x)| < GSration \cdot f_g(x)$
Condition 3: $|f_b(x) - f_g(x)| < GSration \cdot f_g(x)$

Condition 1 means a pixel is counted only when its g-channel value lies between Gsmin and Gsmax, removing the influence of extreme darkness and extreme brightness; conditions 2 and 3 mean a pixel is considered a white point only when the absolute differences between its r and b channel values and its g channel value are each less than the product of GSration and $f_g(x)$. Only pixels satisfying all three conditions are used for the gray world statistics. Gsmin, Gsmax, and GSration are determined by a person skilled in the art; the empirical values Gsmin = 10, Gsmax = 250, and GSration = 0.1 are typical.
When the number of white points in one frame of image exceeds a set threshold, the statistical result of that frame can be used to calculate the gray world white balance coefficients; the results of multiple frames of images are averaged to obtain the finally output gray world white balance coefficients. The formula for calculating the gray world white balance coefficients is as follows:
wherein gainR and gainB are the white balance coefficients currently adopted by the r and b channels, and GSgainR and GSgainB are the gray world white balance coefficients.
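The white-point statistics and coefficient update above can be sketched as follows. The function name `gray_world_gains`, the `min_white_points` threshold, and the coefficient formula itself (scaling the current gains by the channel-sum ratios so that the r and b sums match the g sum) are assumptions, since the text does not reproduce the exact formula; the three white-point conditions follow the text directly.

```python
import numpy as np

GS_MIN, GS_MAX, GS_RATION = 10, 250, 0.1  # empirical values from the text

def gray_world_gains(rgb, gain_r, gain_b, min_white_points=100):
    """Gray world white balance gains from white-point statistics.

    rgb: H x W x 3 array; gain_r, gain_b: gains currently applied.
    Returns (GSgainR, GSgainB), or None when the frame has too few
    white points to be used (the text's per-frame threshold).
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    # Condition 1: g value strictly between Gsmin and Gsmax.
    # Conditions 2-3: |r-g| and |b-g| below GSration * g.
    white = ((g > GS_MIN) & (g < GS_MAX)
             & (np.abs(r - g) < GS_RATION * g)
             & (np.abs(b - g) < GS_RATION * g))
    if white.sum() < min_white_points:
        return None
    sr, sg, sb = r[white].sum(), g[white].sum(), b[white].sum()
    # Assumed formula: scale the current gains by the channel-sum
    # ratios so the corrected white points become achromatic.
    return gain_r * sg / sr, gain_b * sg / sb
```

In the described pipeline the per-frame results would additionally be averaged over multiple frames before output.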
(V) The gray edge white balance coefficients obtained in step (II) are used to constrain the gray world white balance coefficients obtained in step (IV), and the white balance coefficients finally used for image correction are obtained through the white balance coefficient calculation module:
the method comprises the following specific steps:
Two white balance threshold parameters limit1 and limit2 are set (their values are determined by debugging by a person skilled in the art; generally the empirical values limit1 = 0.1 and limit2 = 0.3 are taken). First, the difference absgainR between GEgainR and GSgainR is calculated, i.e. absgainR = |GEgainR − GSgainR|. If absgainR is not greater than limit1, the gray world white balance coefficient is close to the gray edge white balance coefficient, and the accurate white balance solution obtained by the gray world algorithm is adopted; if absgainR is greater than limit2, the gray world white balance coefficient is far from the gray edge white balance coefficient, and the white balance solution is obtained by the gray edge algorithm; otherwise, in the critical region, the weighted average of the two is taken as the white balance solution according to the following formula:
where gainR_new and gainB_new are the r and b channel white balance coefficients finally used for correction, and the white balance coefficient of the g channel is always set to 1.
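The three-way decision above can be sketched as follows. The linear weight w = (limit2 − absgainR)/(limit2 − limit1) in the critical region is an assumption (the text only states that a weighted average is used there), and applying the same rule independently to the b channel is likewise assumed; the function name `constrain_gains` is illustrative.

```python
def constrain_gains(ge_gain_r, ge_gain_b, gs_gain_r, gs_gain_b,
                    limit1=0.1, limit2=0.3):
    """Constrain the gray world gains with the gray edge gains.

    Empirical thresholds limit1 = 0.1 and limit2 = 0.3 from the text.
    """
    def blend(ge, gs):
        d = abs(ge - gs)
        if d <= limit1:   # close: trust the gray world solution
            return gs
        if d > limit2:    # far apart: fall back to the gray edge solution
            return ge
        # Critical region: assumed linear interpolation between the two.
        w = (limit2 - d) / (limit2 - limit1)
        return w * gs + (1 - w) * ge
    return blend(ge_gain_r, gs_gain_r), blend(ge_gain_b, gs_gain_b)
```

The weight shrinks to 0 as the difference approaches limit2, so the blend is continuous at both thresholds.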
(VI) According to the white balance coefficients obtained in step (V), the process returns to step (III) to perform white balance correction processing and demosaicing processing on the Bayer image again in sequence; then, after color image processing (mainly including color correction, γ correction, color space conversion, HDR, boundary enhancement, and the like), the image is input to a compression/display/storage device (for image display or image storage), and image processing is completed.
In a specific implementation, as shown in fig. 1, step (I) is implemented by the image sensor and the Bayer image processing unit; step (II) is implemented by the gray edge statistics module and the white balance coefficient calculation module; step (III) is implemented by the white balance correction module and the demosaicing module; step (IV) is implemented by the gray world statistics module and the white balance coefficient calculation module; step (V) is implemented by the white balance coefficient calculation module; and step (VI) is implemented by the white balance correction module, the demosaicing module, the color image processing unit and the compression/display/storage device.
Example 2
As shown in fig. 1, a white balance correction image processing apparatus that constrains a gray world based on a gray edge includes:
an image sensor that outputs an image to a Bayer image processing unit in a Bayer image pattern; the Bayer image processing unit outputs a Bayer image;
on one hand, the output Bayer image is subjected to gray edge algorithm sequentially through a gray edge statistical module and a white balance coefficient calculation module to obtain gray edge white balance coefficients GEgainR and GEgainB, and the gray edge white balance coefficients GEgainR and GEgainB are output to a white balance correction module;
on the other hand, the output Bayer image is subjected to white balance correction processing through the white balance correction module according to the obtained gray edge white balance coefficient and is output to the demosaicing module; the demosaicing module carries out demosaicing processing and outputs a demosaiced image;
the output demosaiced image is then subjected to the gray world algorithm sequentially through the gray world statistics module and the white balance coefficient calculation module to obtain the gray world white balance coefficients GSgainR and GSgainB, which are provided to the white balance coefficient calculation module;
the white balance coefficient calculation module uses the obtained gray edge white balance coefficients to constrain the obtained gray world white balance coefficients, obtains the white balance coefficients finally used for image correction, and outputs them to the white balance correction module; the specific steps are as follows: two white balance threshold parameters limit1 and limit2 are set (their values are determined by debugging by a person skilled in the art; generally the empirical values limit1 = 0.1 and limit2 = 0.3 are taken). First, the difference absgainR between GEgainR and GSgainR is calculated, i.e. absgainR = |GEgainR − GSgainR|. If absgainR is not greater than limit1, the gray world white balance coefficient is close to the gray edge white balance coefficient, and the accurate white balance solution obtained by the gray world algorithm is adopted; if absgainR is greater than limit2, the gray world white balance coefficient is far from the gray edge white balance coefficient, and the white balance solution is obtained by the gray edge algorithm; otherwise, in the critical region, the weighted average of the two is taken as the white balance solution according to the following formula:
wherein,
gainR_new and gainB_new are the r and b channel white balance coefficients finally used for correction; the white balance coefficient of the g channel is always set to 1;
the white balance correction module performs white balance correction processing on the Bayer image again according to the white balance coefficients finally used for image correction and outputs the result to the demosaicing module; the demosaicing module outputs the demosaiced image to the color image processing unit; the color image processing unit processes the image and outputs it to a compression/display/storage device or the like.
In a specific implementation, the gray edge algorithm performed by the gray edge statistics module and the white balance coefficient calculation module adopts either a gray edge algorithm based on image block gradients, or a gray edge algorithm based on horizontal mean smooth down-sampling and horizontal first-order differences of the image, to obtain the Bayer image illumination estimation value e; the gray edge white balance coefficients GEgainR and GEgainB are then obtained from e. The specific steps are as follows:
Gray edge algorithm based on image block gradients: first, the image is uniformly divided into Bw × Bh blocks, each of size s × s, and all pixels within each block are averaged to obtain a single pixel value
thus obtaining a small image of size Bw × Bh; then the second-order gradient of the image is calculated with the discrete Laplacian operator in the following formula (1), and the average gradient of each channel is calculated to obtain the Bayer image illumination estimation value e;
wherein,
represents the small image of the c-channel image after the s × s block averaging operation; Lap is the discrete Laplacian operator:
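The block-gradient variant can be sketched as follows. This is a sketch under stated assumptions, not the patent's exact implementation: the 4-neighbour Laplacian kernel, the default block size s = 8, and the function name `block_gradient_illuminant` are all assumptions, since the text does not reproduce the Laplacian it uses.

```python
import numpy as np

def block_gradient_illuminant(rgb, s=8):
    """Gray edge illuminant estimate from block-averaged second-order
    gradients: average each s x s block to one pixel, apply a discrete
    Laplacian, and take the mean absolute response per channel.
    """
    h, w, _ = rgb.shape
    bh, bw = h // s, w // s
    e = np.zeros(3)
    for c in range(3):
        ch = rgb[:bh * s, :bw * s, c].astype(np.float64)
        # Average each s x s block to one pixel -> bh x bw small image.
        small = ch.reshape(bh, s, bw, s).mean(axis=(1, 3))
        # Assumed 4-neighbour discrete Laplacian on the interior.
        lap = (small[:-2, 1:-1] + small[2:, 1:-1]
               + small[1:-1, :-2] + small[1:-1, 2:]
               - 4.0 * small[1:-1, 1:-1])
        e[c] = np.abs(lap).mean()
    return e
```

On a linearly shaded (gradient-free after block averaging) image the Laplacian response is zero, which is the expected behaviour for a second-order gray edge statistic.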
Gray edge algorithm based on horizontal mean smooth down-sampling and horizontal first-order differences of the image: the N-order down-sampling filter template is set as:
the image after the horizontal mean smooth down-sampling is
wherein N ∈ [1, width(f(m, n))], i.e. the down-sampling template is larger than 1 and smaller than the image width;
a horizontal first-order difference is adopted for the gradient; the difference template is:
T=[1 -1]
the difference image is the convolution of the down-sampled image with the difference template:
for the Minkowski norm, the order p is taken as 1, i.e. no Minkowski norm is actually introduced;
an illumination estimate for the scene can be derived as:
wherein,
thereby obtaining a Bayer image illumination estimation value e;
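The horizontal-smoothing variant above can be sketched as follows. The function name `horizontal_gray_edge_illuminant` and the default N = 4 are assumptions; the N-tap horizontal mean with stride N, the difference template T = [1 −1], and the plain mean of absolute differences (Minkowski order p = 1) follow the text.

```python
import numpy as np

def horizontal_gray_edge_illuminant(rgb, n=4):
    """Illuminant estimate via horizontal mean-smoothing down-sampling
    followed by a horizontal first-order difference T = [1 -1];
    with p = 1 the estimate is the mean absolute difference.
    """
    e = np.zeros(3)
    for c in range(3):
        ch = rgb[..., c].astype(np.float64)
        w = (ch.shape[1] // n) * n
        # n-tap horizontal mean filter applied with stride n
        # (mean-smooth down-sampling along each row).
        down = ch[:, :w].reshape(ch.shape[0], w // n, n).mean(axis=2)
        # Horizontal first-order difference (convolution with [1, -1]).
        diff = np.abs(down[:, 1:] - down[:, :-1])
        e[c] = diff.mean()
    return e
```

Because each channel is reduced to the mean magnitude of its horizontal differences, a channel with twice the intensity of another yields twice the estimate, which is what the per-channel ratios in the subsequent gain computation rely on.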
Then, according to the illumination estimation value e = [er, eg, eb]T, the Bayer image is corrected to the output image f′(x) under standard illumination:
then the gray edge white balance coefficient GEgainR is equal to
similarly, GEgainB is equal to
I.e. a gray edge constraint is obtained.
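The conversion from the illuminant estimate to the gray edge gains can be sketched as follows. Normalising the r and b channels to the g channel (GEgainR = eg/er, GEgainB = eg/eb) is an assumption, consistent with the g-channel gain being fixed at 1 elsewhere in the text; the function name `gray_edge_gains` is illustrative.

```python
def gray_edge_gains(e):
    """Convert an illuminant estimate e = [er, eg, eb] into the gray
    edge white balance gains (GEgainR, GEgainB), assuming the r and b
    channels are normalised to the g channel (g gain fixed at 1).
    """
    er, eg, eb = float(e[0]), float(e[1]), float(e[2])
    return eg / er, eg / eb
```

With this convention, scaling the r channel of the corrected image by GEgainR makes its estimated illuminant component equal to that of the g channel.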
In a specific implementation, the gray world algorithm performed by the gray world statistics module and the white balance coefficient calculation module adopts the following method:
firstly, the number of white points in each frame of the demosaiced image and the accumulated values ∑fr(x), ∑fg(x), ∑fb(x) of the r, g, b channels over those white points are calculated; a pixel is considered a white point when the following three conditions are simultaneously satisfied:
Condition 1: Gsmin < fg(x) < Gsmax;
Condition 2: |fr(x) − fg(x)| < GSration · fg(x);
Condition 3: |fb(x) − fg(x)| < GSration · fg(x).
Condition 1 indicates that a pixel is counted only when the value of its g channel lies between Gsmin and Gsmax, which removes the influence of extremely dark and extremely bright pixels; conditions 2 and 3 indicate that a pixel is considered a white point only when the absolute value of the difference between each of its r, b channel values and its g channel value is less than the product of GSration and fg(x). Only a pixel meeting all three conditions simultaneously is used for the gray world statistics. Gsmin, Gsmax and GSration are determined by debugging by a person skilled in the art, and can generally take the empirical values Gsmin = 10, Gsmax = 250 and GSration = 0.1.
When the number of white points in one frame of image exceeds a set threshold, the statistical result of that frame can be used to calculate the gray world white balance coefficients; the results of multiple frames of images are averaged to obtain the finally output gray world white balance coefficients. The formula for calculating the gray world white balance coefficients is as follows:
wherein gainR and gainB are the white balance coefficients currently adopted by the r and b channels; thus the gray world white balance coefficients GSgainR and GSgainB are obtained.
In a specific implementation, the demosaicing module converts the Bayer image into an RGB image through a color image interpolation algorithm and outputs the demosaiced image.
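A minimal demosaicing sketch is given below. The text only says a color image interpolation algorithm is used, so the RGGB layout, the bilinear (normalized 3×3 box) interpolation, and the function name `demosaic_bilinear` are all assumptions.

```python
import numpy as np

def demosaic_bilinear(bayer):
    """Minimal bilinear demosaic of an assumed RGGB Bayer mosaic.

    Each output channel is the 3x3 neighbourhood sum of the known
    samples of that channel divided by the number of known samples,
    which reproduces known pixels exactly and bilinearly
    interpolates the missing ones.
    """
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R sites
    masks[0::2, 1::2, 1] = True   # G sites on even rows
    masks[1::2, 0::2, 1] = True   # G sites on odd rows
    masks[1::2, 1::2, 2] = True   # B sites

    def box(a):
        # 3x3 neighbourhood sum via zero padding and shifted views.
        p = np.pad(a, 1)
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    for c in range(3):
        vals = np.where(masks[..., c], bayer.astype(np.float64), 0.0)
        cnt = masks[..., c].astype(np.float64)
        rgb[..., c] = box(vals) / np.maximum(box(cnt), 1e-9)
    return rgb
```

Production pipelines typically use edge-directed interpolation instead, but the normalized-box form is enough to illustrate the Bayer-to-RGB conversion step.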