CN115442573B - Image processing method and device and electronic equipment - Google Patents
Image processing method and device and electronic equipment
- Publication number
- CN115442573B (application CN202211013504.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- pixels
- white channel
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Processing (AREA)
Abstract
The embodiment of the application provides an image processing method, an image processing device and electronic equipment, relates to the technical field of image processing, and can alleviate the problem that a color image obtained by demosaicing an image from a CFA with added white (W) pixels is prone to false color and zipper noise. The image processing method comprises the following steps: acquiring an image from an image sensor; combining the two color pixels in each sub-pixel block of the image into a color channel pixel, and combining the two white pixels in each sub-pixel block into a white channel pixel; splitting the image into a white channel image comprising all white channel pixels and a color channel image comprising all color channel pixels; correcting the white channel image to obtain a white channel corrected image; and performing image fusion on the white channel corrected image and the color channel image to obtain a demosaiced image.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, and an electronic device.
Background
With the development of electronic devices such as mobile phones, users increasingly demand high-quality pictures and video content captured by these devices. However, limited by cost, power consumption and size, image sensors on electronic devices often rely on a specific color filter array (CFA) to obtain limited image information, so interpolation algorithms must be designed for the CFA to restore high-quality images. A common CFA adopts a Bayer pattern arrangement and is composed of three types of color filters for specific wavelengths: red (R), green (G) and blue (B). However, the signal-to-noise ratio of an image sensor using a Bayer-pattern CFA is low, so more and more image sensors incorporate white pixel (W) filters into the CFA; since a white pixel transmits visible light broadly, it increases the amount of incoming light, improving the signal-to-noise ratio and the sensitivity.
However, when demosaicing is currently performed on an image from a CFA with added W pixels, the resulting color image is prone to false color and zipper noise.
Disclosure of Invention
Provided are an image processing method, an image processing apparatus and an electronic device, which can alleviate the problem that a color image obtained by demosaicing an image from a CFA with added W pixels is prone to false color and zipper noise.
In a first aspect, there is provided an image processing method including: acquiring an image from an image sensor, wherein the image comprises a plurality of repeating units arranged in a plurality of rows and a plurality of columns, each repeating unit comprises four pixel blocks arranged in 2 rows and 2 columns, each pixel block comprises sub-pixel blocks arranged in m rows and n columns, m is more than or equal to 1, n is more than or equal to 1, each sub-pixel block comprises two white pixels arranged in a first diagonal direction and two color pixels arranged in a second diagonal direction, the color pixels in the two pixel blocks arranged in the first diagonal direction in the four pixel blocks are green pixels, and the color pixels in the two pixel blocks arranged in the second diagonal direction in the four pixel blocks are blue pixels and red pixels respectively; combining two color pixels in each sub-pixel block in the image into a color channel pixel, and combining two white pixels in each sub-pixel block in the image into a white channel pixel; splitting an image into a white channel image and a color channel image, the white channel image comprising all white channel pixels and the color channel image comprising all color channel pixels; correcting the white channel image to obtain a white channel corrected image; correcting the white channel image includes: increasing the blur degree of at least part of the white channel pixels in the white channel image in the first diagonal direction and/or decreasing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction; and performing image fusion on the white channel correction image and the color channel image to obtain a demosaiced image.
In one possible implementation, the process of increasing the blur degree of at least part of the white channel pixels in the white channel image in the first diagonal direction and/or decreasing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction includes: calculating a first gradient in a first diagonal direction and a second gradient in a second diagonal direction for each white channel pixel in the white channel image; and for the white channel pixels with the ratio of the first gradient to the second gradient meeting the preset condition, if the first gradient is smaller than the second gradient, performing blurring correction processing, and if the first gradient is larger than the second gradient, performing sharpening correction processing.
In one possible implementation, the first gradient is the sum of absolute values of differences between every two adjacent white channel pixels in a first diagonal direction in a neighborhood centered on the current white channel pixel; the second gradient is the sum of absolute values of differences between every two adjacent white channel pixels in the second diagonal direction in the neighborhood centered on the current white channel pixel.
In one possible implementation, the neighborhood is composed of white channel pixels arranged in 5 rows and 5 columns.
In one possible embodiment, the ratio of the first gradient to the second gradient satisfies a preset condition: the maximum value of the first ratio and the second ratio is larger than a first preset value, the first ratio is the ratio of the first gradient to the second gradient, and the second ratio is the ratio of the second gradient to the first gradient.
In one possible embodiment, the first preset value is greater than 5 and less than 8.
In one possible implementation manner, for a white channel pixel whose ratio of the first gradient to the second gradient satisfies the preset condition, performing the blur correction processing if the first gradient is smaller than the second gradient, and performing the sharpening correction processing if the first gradient is larger than the second gradient, includes: for the current white channel pixel, if the maximum value of the first ratio and the second ratio is greater than the first preset value and the first gradient is not equal to the second gradient, modifying the pixel value of the current white channel pixel according to the following formula: w'1 = w1 + β(w6 + w8 - w2 - w4); wherein w'1 is the pixel value of the current white channel pixel after modification, w1 is the pixel value of the current white channel pixel before modification, w2 and w4 are the pixel values of the white channel pixels adjacent to the current white channel pixel in the first diagonal direction and located on its two sides, w6 and w8 are the pixel values of the white channel pixels adjacent to the current white channel pixel in the second diagonal direction and located on its two sides, and 0 < β < 1.
In one possible embodiment, 0.3< β <0.6.
In one possible embodiment, before the process of performing image fusion on the white channel corrected image and the color channel image to obtain the demosaiced image, the method further includes: for the modified current white channel pixel, if the maximum value of the third ratio and the fourth ratio is larger than a second preset value, restoring the pixel value of the current white channel pixel to w1, wherein the third ratio is the ratio of w'1 to w1, and the fourth ratio is the ratio of w1 to w'1.
In one possible embodiment, the second preset value is greater than 1.4 and less than 1.6.
In a second aspect, there is provided an image processing apparatus comprising: an image acquisition unit configured to acquire an image from an image sensor, the image including a plurality of repeating units arranged in a plurality of rows and a plurality of columns, each repeating unit including four pixel blocks arranged in 2 rows and 2 columns, each pixel block including sub-pixel blocks arranged in m rows and n columns, m being 1 or more, n being 1 or more, each sub-pixel block including two white pixels arranged in a first diagonal direction and two color pixels arranged in a second diagonal direction, color pixels in two pixel blocks arranged in the first diagonal direction in the four pixel blocks being green pixels, and color pixels in two pixel blocks arranged in the second diagonal direction in the four pixel blocks being blue pixels and red pixels, respectively; a merging unit configured to merge two color pixels in each sub-pixel block in the image into a color channel pixel, and merge two white pixels in each sub-pixel block in the image into a white channel pixel; a splitting unit for splitting the image into a white channel image and a color channel image, the white channel image comprising all white channel pixels and the color channel image comprising all color channel pixels; a correction unit for correcting the white channel image to obtain a white channel corrected image, wherein correcting the white channel image includes: increasing the blur degree of at least part of the white channel pixels in the white channel image in the first diagonal direction and/or decreasing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction; and a fusion unit for performing image fusion on the white channel corrected image and the color channel image to obtain a demosaiced image.
In a third aspect, there is provided an image processing apparatus comprising: the image processing device comprises a processor and a memory, wherein the memory is used for storing at least one instruction, and the instruction is loaded and executed by the processor to realize the image processing method.
In a fourth aspect, an electronic device is provided, including the image processing apparatus described above.
According to the image processing method, the device and the electronic equipment, for an image from an image sensor using an RGBW-arranged CFA, the image is split into a white channel image and a color channel image based on diagonal pixel merging; the white channel image is then corrected to improve the consistency of the white channel pixels and the color channel pixels in the diagonal directions, and image fusion is performed based on the corrected image, thereby reducing the false color and zipper noise caused by poor consistency of white channel pixels and color channel pixels in the diagonal direction. Because the defects in the fused image exist mainly in the diagonal direction in which the consistency of white channel pixels and color channel pixels is poorer, while horizontal or vertical boundaries, stripe areas and flat areas show no obvious defects, the image correction process of the embodiment of the application can target only the diagonal directions and depends only on the single channel obtained by diagonal merging and downsampling; the complexity of the whole algorithm is therefore low, the real-time performance is good, and the method can be applied to the processing of video images.
Drawings
FIG. 1 is a schematic diagram of a minimum repeating unit of an image corresponding to a CFA structure employing Bayer pattern;
FIG. 2 is a schematic diagram of a minimum repeating unit of a CFA-corresponding image using the Hexa-deca RGBW mode;
FIG. 3 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of pixel variation of the image corresponding to FIG. 3;
FIG. 5 is a schematic diagram of another image with minimal repeating units according to an embodiment of the application;
FIG. 6 is a schematic diagram of another image with minimal repeating units according to an embodiment of the application;
FIG. 7 is a schematic diagram of a plurality of minimal repeating unit arrangements of an image according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a white channel correction process of an image processing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a neighborhood of 5*5 centered on a current white channel pixel in an embodiment of the present application;
FIG. 10 is a schematic diagram of a white channel correction process of another image processing method according to an embodiment of the present application;
fig. 11 is a block diagram illustrating an image processing apparatus according to an embodiment of the present application.
Detailed Description
The terminology used in the description of the embodiments of the application herein is for the purpose of describing particular embodiments of the application only and is not intended to be limiting of the application.
Before describing the embodiments of the present application, the related art is first described. As shown in fig. 1, fig. 1 is the minimum repeating unit of an image corresponding to a CFA structure using a Bayer pattern, in which each smallest rectangular frame may be referred to as one pixel. With the development and miniaturization of image sensors, each pixel of a Bayer-pattern CFA can only transmit incident light of a specific wavelength, so the signal-to-noise ratio of the image sensor corresponding to a Bayer-pattern CFA is low. CFAs such as the one shown in fig. 2 have therefore appeared, where fig. 2 is the minimum repeating unit of an image corresponding to a CFA using the Hexa-deca RGBW pattern. Since the sampling rate of the R, G, B components in the RGBW pattern is reduced to half that of the Bayer pattern, the information of the W component must be fully used to fuse with, or guide the generation of, the final color image. However, since the W component and the R, G, B color components are located at different sampling points, directly using the W component for guiding and fusion easily generates false color and zipper noise. The technical solution of the embodiments of the present application is provided to solve the above technical problems and is described below.
As shown in fig. 3 and 4, an embodiment of the present application provides an image processing method, including:
Step 101, acquiring an image from an image sensor. The image comprises a plurality of repeating units arranged in a plurality of rows and a plurality of columns, for example as shown in fig. 2. Each repeating unit comprises four pixel blocks arranged in 2 rows and 2 columns; each pixel block comprises sub-pixel blocks arranged in m rows and n columns, with m ≥ 1 and n ≥ 1 (in fig. 2, m = n = 2); and each sub-pixel block comprises two white pixels W arranged in a first diagonal direction a1 and two color pixels arranged in a second diagonal direction a2. The first diagonal direction a1 and the second diagonal direction a2 are two mutually perpendicular diagonal directions; for example, the first diagonal direction a1 is the direction obtained by rotating the row direction 45° counterclockwise, and the second diagonal direction a2 is the direction obtained by rotating the row direction 135° counterclockwise. The color pixels in the two pixel blocks arranged along the first diagonal direction a1 among the four pixel blocks are green pixels G, and the color pixels in the two pixel blocks arranged along the second diagonal direction a2 among the four pixel blocks are blue pixels B and red pixels R, respectively. That is, the smallest rectangular frame in fig. 2 is one pixel; four adjacent pixels arranged in 2 rows and 2 columns form a sub-pixel block; the four adjacent sub-pixel blocks arranged in 2 rows and 2 columns in the upper left corner form a pixel block composed of white pixels W and blue pixels B; the four adjacent sub-pixel blocks arranged in 2 rows and 2 columns in the upper right corner form a pixel block composed of white pixels W and green pixels G; the four adjacent sub-pixel blocks arranged in 2 rows and 2 columns in the lower right corner form a pixel block composed of white pixels W and red pixels R; and the four adjacent sub-pixel blocks arranged in 2 rows and 2 columns in the lower left corner form a pixel block composed of white pixels W and green pixels G. The repeating unit is thus formed by 8 rows and 8 columns of pixels, each pixel block is formed by 4 rows and 4 columns of pixels, each sub-pixel block is formed by 2 rows and 2 columns of pixels, and in both the row direction and the column direction any two adjacent pixels are respectively a white pixel and a color pixel. The pixel arrangement of the repeating unit shown in fig. 2 is only an example; other pixel arrangements may be adopted in other possible embodiments. For example, fig. 5 shows a repeating unit of another pixel arrangement, which is the structure of fig. 2 after mirror inversion in the row direction, and fig. 6 shows a repeating unit of yet another pixel arrangement, in which each pixel block has one more row and one more column of sub-pixel blocks than the structure of fig. 2. The repeating units shown are only schematic illustrations; the image acquired from the image sensor is formed by a plurality of such repeating units arranged in a plurality of rows and a plurality of columns, for example as shown in fig. 7;
Step 102, merging two color pixels in each sub-pixel block in the image into a color channel pixel, and merging two white pixels in each sub-pixel block in the image into a white channel pixel, that is, merging diagonal pixels in each sub-pixel block, for example, in the structure shown in fig. 2, a repeating unit is composed of 8 rows and 8 columns of pixels, and after merging diagonal pixels in each sub-pixel block, an image composed of, for example, 4 rows and 8 columns of pixels can be formed;
Step 103, splitting the image into a white channel image and a color channel image, wherein the white channel image comprises all white channel pixels W, the color channel image comprises all color channel pixels R, G, B, and the resolutions of the white channel image and the color channel image are half of the resolution of the original image so as to facilitate the subsequent operation;
Step 104, correcting the white channel image to obtain a white channel corrected image;
Correcting the white channel image includes: increasing the blur degree of at least part of the white channel pixels W in the white channel image in the first diagonal direction a1 and/or decreasing the sharpness of at least part of the white channel pixels W in the white channel image in the second diagonal direction a2;
Step 105, performing image fusion on the white channel corrected image and the color channel image to obtain a demosaiced image. The image fusion process can utilize relationships between channels, such as color difference or color ratio, to perform the fusion so as to obtain the final R, G, B color image.
Specifically, the image in the embodiment of the present application comes from an image sensor using a Hexa-deca RGBW pattern CFA. Since the white pixels and the color pixels are located at different positions, and the white pixels and the color pixels are combined in step 102 along two different diagonal directions, the consistency between the white channel pixels and the color channel pixels can differ greatly between the two diagonal directions. Therefore, in step 104, for at least some of the white channel pixels W in the white channel image, pixel value correction is performed based on the diagonal directions. As can be seen from fig. 2, the white pixels are sharper than the color pixels in the first diagonal direction a1 and smoother than the color pixels in the second diagonal direction a2. By processing the white channel pixels W in the first diagonal direction a1 and/or the second diagonal direction a2 in step 104, the consistency between the white channel pixels and the color channel pixels in the diagonal directions is improved, so that the false color and zipper noise caused by poor diagonal consistency can be reduced when the white channel corrected image and the color channel image are subsequently fused.
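As a purely illustrative aid, the following Python sketch builds one 8x8 repeating unit consistent with the textual description of fig. 2 above. The orientation of the two diagonals and the corner assignment of B, G, G, R follow that text, but the actual figure may differ in orientation, so this layout is an assumption used only to make the structure concrete.

```python
import numpy as np

def build_repeating_unit() -> np.ndarray:
    """Build one 8x8 Hexa-deca RGBW repeating unit as a character array.

    Assumed layout: each 2x2 sub-pixel block holds two 'W' pixels on one
    diagonal and two identical color pixels on the other; the four 4x4
    pixel blocks carry B, G, G, R (upper-left, upper-right, lower-left,
    lower-right respectively), as described in the text.
    """
    # Color of the 4x4 pixel block at each (block_row, block_col) position.
    block_colors = [["B", "G"],
                    ["G", "R"]]
    unit = np.empty((8, 8), dtype="<U1")
    for r in range(8):
        for c in range(8):
            color = block_colors[r // 4][c // 4]
            # Within each 2x2 sub-pixel block, W and the color pixel
            # occupy opposite diagonals; adjacent pixels alternate W/color.
            unit[r, c] = "W" if (r + c) % 2 == 1 else color
    return unit

if __name__ == "__main__":
    for row in build_repeating_unit():
        print(" ".join(row))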
That is, in the embodiment of the present application, for an image from an image sensor employing CFA of RGBW arrangement, white pixels are merged into white channel pixels, color pixels are merged into color channel pixels based on a diagonal pixel merging manner, and the image is split into a white channel image composed of white channel pixels and a color channel image composed of color channel pixels, and then the white channel image is corrected, so that after correction, consistency of the white channel pixels and the color channel pixels in a diagonal direction is improved, and then image fusion is performed.
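To make the diagonal merging of step 102 and the splitting of step 103 concrete, a minimal sketch follows. It assumes the layout built above, a raw mosaic whose height and width are multiples of 2, and averaging as the combination rule (the text only says the two same-type pixels are "combined", so averaging is an assumption); the result is a white channel image and a color channel image whose resolution is halved in each dimension.

```python
import numpy as np

def merge_and_split(raw: np.ndarray, unit: np.ndarray):
    """Diagonally merge each 2x2 sub-pixel block and split into channels.

    raw  : (H, W) mosaic image from the sensor, H and W multiples of 2.
    unit : (8, 8) array of 'W'/'R'/'G'/'B' labels describing the CFA layout
           (e.g. the repeating unit sketched earlier); it is tiled over raw.
    Returns (white, color, color_label), each of shape (H/2, W/2);
    averaging the two same-type pixels is an assumed combination rule.
    """
    h, w = raw.shape
    # Tile the CFA labels over the whole image.
    labels = np.tile(unit, (h // 8 + 1, w // 8 + 1))[:h, :w]
    white = np.zeros((h // 2, w // 2), dtype=np.float32)
    color = np.zeros((h // 2, w // 2), dtype=np.float32)
    color_label = np.empty((h // 2, w // 2), dtype="<U1")
    for r in range(0, h, 2):
        for c in range(0, w, 2):
            block = raw[r:r + 2, c:c + 2].astype(np.float32)
            lab = labels[r:r + 2, c:c + 2]
            is_w = (lab == "W")
            white[r // 2, c // 2] = block[is_w].mean()      # merge two W pixels
            color[r // 2, c // 2] = block[~is_w].mean()     # merge two color pixels
            color_label[r // 2, c // 2] = lab[~is_w][0]     # remember R/G/B type
    return white, color, color_label
```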
According to the image processing method, for an image from an image sensor adopting an RGBW-arranged CFA, the image is split into a white channel image and a color channel image based on diagonal pixel merging; the white channel image is then corrected to improve the consistency of the white channel pixels and the color channel pixels in the diagonal directions, and image fusion is performed based on the corrected image, thereby reducing the false color and zipper noise caused by poor consistency of white channel pixels and color channel pixels in the diagonal direction. Because the defects in the fused image exist mainly in the diagonal direction in which the consistency of white channel pixels and color channel pixels is poorer, while horizontal or vertical boundaries, stripe areas and flat areas show no obvious defects, the image correction process of the embodiment of the application can target only the diagonal directions and depends only on the single channel obtained by diagonal merging and downsampling; the complexity of the whole algorithm is therefore low, the real-time performance is good, and the method can be applied to the processing of video images.
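The fusion rule of step 105 is left open by the text ("color difference or color ratio"), so the sketch below shows only one simple possibility: a color-difference fusion in which, for each color plane, the difference between the (corrected) white channel and the sparse color samples is interpolated with a normalized box filter and subtracted back from the white channel. The function name, the window size and the use of scipy.ndimage are all assumptions for illustration; the output is at the downsampled channel resolution.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_color_difference(white: np.ndarray,
                          color: np.ndarray,
                          color_label: np.ndarray,
                          win: int = 5) -> np.ndarray:
    """Minimal color-difference fusion sketch for step 105.

    For each color plane c, the difference (white - c) is known only at the
    pixels where that color was sampled; it is interpolated with a simple
    normalized box filter and subtracted from the corrected white channel.
    This is only one possible fusion rule, not the one fixed by the patent.
    """
    h, w = white.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    for i, ch in enumerate("RGB"):
        mask = (color_label == ch).astype(np.float32)
        diff = (white - color) * mask
        # Normalized box filter: average of the known differences in a window.
        num = uniform_filter(diff, size=win)
        den = uniform_filter(mask, size=win)
        diff_full = np.where(den > 0, num / np.maximum(den, 1e-6), 0.0)
        rgb[..., i] = white - diff_full
    return np.clip(rgb, 0.0, None)
```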
In a possible implementation manner, as shown in fig. 8, in step 104, the process of increasing the ambiguity of at least part of the white channel pixels in the white channel image in the first diagonal direction a1 and/or decreasing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction a2 includes:
Step 1041, calculating, for each white channel pixel in the white channel image, a first gradient G1 in the first diagonal direction a1 and a second gradient G2 in the second diagonal direction a2;
Step 1042, judging, for the current white channel pixel, whether the ratio of the first gradient G1 to the second gradient G2 meets the preset condition; if yes, entering step 1043; if not, entering step 1044, in which the pixel value of the current white channel pixel is kept unchanged, and then executing step 1042 for the next white channel pixel until all white channel pixels in the white channel image are processed, thereby obtaining the white channel corrected image;
Step 1043, determining the magnitude relation between the first gradient G1 and the second gradient G2. If the first gradient G1 is smaller than the second gradient G2, entering step 1045 to perform the blur correction processing, then executing step 1042 for the next white channel pixel until all white channel pixels in the white channel image are processed, thereby obtaining the white channel corrected image; if the first gradient G1 is larger than the second gradient G2, entering step 1046 to perform the sharpening correction processing, then executing step 1042 for the next white channel pixel until all white channel pixels are processed, thereby obtaining the white channel corrected image; and if the first gradient G1 is equal to the second gradient G2, entering step 1044. That is, for the white channel pixels whose ratio of the first gradient G1 to the second gradient G2 satisfies the preset condition, the blur correction processing is performed if the first gradient is smaller than the second gradient, and the sharpening correction processing is performed if the first gradient is larger than the second gradient.
In one possible implementation, the first gradient G1 is the sum of absolute values of the differences between every two adjacent white channel pixels in the first diagonal direction a1 within a neighborhood centered on the current white channel pixel, and the second gradient G2 is the sum of absolute values of the differences between every two adjacent white channel pixels in the second diagonal direction a2 within the neighborhood centered on the current white channel pixel. The neighborhood is, for example, composed of white channel pixels arranged in 5 rows and 5 columns.
Specifically, the above correction process is implemented by taking each white channel pixel in turn as the current white channel pixel and determining a 5*5 neighborhood centered on it, as shown in fig. 9. Fig. 9 is a schematic view of a 5*5 neighborhood centered on the current white channel pixel, in which 5 rows and 5 columns of white channel pixels are illustrated; the white channel pixel at the center is the current white channel pixel W1, the white channel pixels in the first diagonal direction a1 are W5, W4, W1, W2 and W3 in order, and the white channel pixels in the second diagonal direction a2 are W9, W8, W1, W6 and W7 in order. The first gradient G1 and the second gradient G2 corresponding to W1 can be calculated based on the following formulas:
G1=ABS(w1-w2)+ABS(w1-w4)+ABS(w2-w3)+ABS(w4-w5) (1)
G2=ABS(w1-w6)+ABS(w1-w8)+ABS(w6-w7)+ABS(w8-w9) (2)
ABS is the absolute value operator, and w1, w2, w3, w4, w5, w6, w7, w8 and w9 are the pixel values corresponding to W1, W2, W3, W4, W5, W6, W7, W8 and W9, respectively. The structure shown in fig. 9 is a part of the split white channel image. In addition, white channel pixels at the edge of the white channel image may not have a complete 5*5 neighborhood centered on them; in this case, one processing manner is, before calculating the first gradient G1 and the second gradient G2 (for example, before step 1042), to add two rows or columns of white channel pixels on the left, right, upper and lower sides of the white channel image according to a preset rule, so that the corresponding gradients of the white channel pixels originally located at the image edge can also be calculated by the above method.
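A small sketch of formulas (1) and (2) follows. The mapping of W2..W9 to concrete array offsets depends on how the diagonals of fig. 9 are oriented in memory, so the offsets used here are an assumption, as is the reflect padding suggested for border pixels.

```python
import numpy as np

def diagonal_gradients(white: np.ndarray, r: int, c: int):
    """Compute G1 and G2 of formulas (1) and (2) for the pixel at (r, c).

    The first diagonal is taken here as the "up-right" direction and the
    second diagonal as the "down-right" direction; which of the two matches
    direction a1 in fig. 9 is an assumption of this sketch.  `white` is
    assumed to be padded (or (r, c) at least two pixels from the border)
    so that the 5x5 neighborhood exists.
    """
    w1 = white[r, c]
    # Neighbors along the first diagonal: w5, w4, w1, w2, w3 (fig. 9).
    w2, w3 = white[r - 1, c + 1], white[r - 2, c + 2]
    w4, w5 = white[r + 1, c - 1], white[r + 2, c - 2]
    # Neighbors along the second diagonal: w9, w8, w1, w6, w7 (fig. 9).
    w6, w7 = white[r + 1, c + 1], white[r + 2, c + 2]
    w8, w9 = white[r - 1, c - 1], white[r - 2, c - 2]
    g1 = abs(w1 - w2) + abs(w1 - w4) + abs(w2 - w3) + abs(w4 - w5)   # formula (1)
    g2 = abs(w1 - w6) + abs(w1 - w8) + abs(w6 - w7) + abs(w8 - w9)   # formula (2)
    return g1, g2

# Border handling (one option mentioned in the text): reflect-pad the white
# channel image by two pixels on each side before computing the gradients.
# padded = np.pad(white, 2, mode="reflect")
```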
In one possible embodiment, as shown in fig. 10, the condition that the ratio of the first gradient G1 to the second gradient G2 satisfies the preset condition is: the maximum value α of the first ratio D1 and the second ratio D2 is larger than a first preset value, where the first ratio D1 is the ratio of the first gradient G1 to the second gradient G2, and the second ratio D2 is the ratio of the second gradient G2 to the first gradient G1. That is, in step 1042, α is calculated by the following formula:
α=MAX(G1/G2,G2/G1) (3)
where MAX denotes the operation of taking the maximum value. It is then judged whether α is larger than the first preset value, and the subsequent steps are executed according to the judgment result: α larger than the first preset value indicates that the preset condition is satisfied, and α not larger than the first preset value indicates that the preset condition is not satisfied.
In one possible embodiment, the first preset value is greater than 5 and less than 8.
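A minimal sketch of the preset-condition test of step 1042 is given below; since the symbol used for the first preset value is not reproduced here, the parameter name thr and its default of 6.0 (inside the suggested range of 5 to 8) are assumptions, as is the epsilon guard against division by zero in flat regions.

```python
def satisfies_preset_condition(g1: float, g2: float, thr: float = 6.0) -> bool:
    """Check the preset condition of step 1042: alpha = MAX(G1/G2, G2/G1) > thr.

    thr plays the role of the first preset value; eps avoids division by
    zero when one of the gradients vanishes in a flat region.
    """
    eps = 1e-6
    alpha = max(g1 / max(g2, eps), g2 / max(g1, eps))
    return alpha > thr
```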
In one possible implementation manner, for a white channel pixel whose ratio of the first gradient G1 to the second gradient G2 satisfies the preset condition, performing the blur correction processing if the first gradient G1 is smaller than the second gradient G2, and performing the sharpening correction processing if the first gradient G1 is larger than the second gradient G2, includes: for the current white channel pixel W1, if the maximum value α of the first ratio D1 and the second ratio D2 is greater than the first preset value and the first gradient G1 is not equal to the second gradient G2, entering step 201 to modify the pixel value of the current white channel pixel W1 according to the following formula: w'1 = w1 + β(w6 + w8 - w2 - w4); where w'1 is the pixel value of the current white channel pixel W1 after modification, w1 is the pixel value of the current white channel pixel W1 before modification, w2 and w4 are the pixel values of the white channel pixels adjacent to the current white channel pixel W1 in the first diagonal direction a1 and located on its two sides (for example, the pixel values corresponding to W2 and W4 in fig. 9), w6 and w8 are the pixel values of the white channel pixels adjacent to the current white channel pixel W1 in the second diagonal direction a2 and located on its two sides (for example, the pixel values corresponding to W6 and W8 in fig. 9), and 0 < β < 1.
Specifically, the calculation formula of w'1 is derived from the following formulas:
g1=2w1-w2-w4 (4)
g2=2w1-w6-w8 (5)
w′1=w1+β(g1-g2) (6)
If the first gradient G1 corresponding to the current white channel pixel W1 is smaller than the second gradient G2, g1 in formula (4) can be regarded as negligible, and w'1 = w1 + β(g1 - g2) can be understood as subtracting the product of g2 and β, which achieves a blurring effect; if G1 is larger than G2, g2 in formula (5) can be regarded as negligible, and w'1 = w1 + β(g1 - g2) can be understood as adding the product of g1 and β, which achieves a sharpening effect.
In one possible embodiment, 0.3< β <0.6.
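Putting formulas (4) to (6) together with the preset-condition test, a per-pixel correction sketch could look as follows. It reuses the diagonal_gradients and satisfies_preset_condition helpers sketched above, and the default beta of 0.5 (inside the suggested range 0.3 to 0.6) is an arbitrary choice.

```python
import numpy as np

def correct_white_pixel(white: np.ndarray, r: int, c: int,
                        beta: float = 0.5, thr: float = 6.0) -> float:
    """Apply the blur/sharpen correction of formula (6) to one white pixel.

    Returns w'1 = w1 + beta * (g1 - g2) when the preset condition holds and
    G1 != G2, otherwise the original value.  Uses the diagonal orientation
    assumed in `diagonal_gradients` above.
    """
    g1_grad, g2_grad = diagonal_gradients(white, r, c)
    if not satisfies_preset_condition(g1_grad, g2_grad, thr) or g1_grad == g2_grad:
        return float(white[r, c])
    w1 = white[r, c]
    w2, w4 = white[r - 1, c + 1], white[r + 1, c - 1]    # first diagonal neighbors
    w6, w8 = white[r + 1, c + 1], white[r - 1, c - 1]    # second diagonal neighbors
    g1 = 2 * w1 - w2 - w4                                 # formula (4)
    g2 = 2 * w1 - w6 - w8                                 # formula (5)
    return float(w1 + beta * (g1 - g2))                   # formula (6)
```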
In one possible implementation manner, in order to enhance the robustness of the image processing method, whether the pixel value of a modified white channel pixel deviates too much can be judged by the following method; if it deviates too much, the modification is considered abnormal, the modification is discarded and the pixel value before modification is restored. Specifically, before the process of performing image fusion on the white channel corrected image and the color channel image to obtain the demosaiced image, the method further comprises: for the modified current white channel pixel, calculating the maximum value γ of the third ratio D3 and the fourth ratio D4 by the following formula:
γ=MAX(w'1/w1,w1/w'1) (7)
After step 201, step 202 is performed to judge whether the maximum value γ of the third ratio D3 and the fourth ratio D4 is greater than the second preset value θ. If yes, that is, if γ is greater than the second preset value θ, the pixel value of the modified current white channel pixel W1 deviates too much, and step 203 is entered to restore the pixel value of the current white channel pixel W1 to w1; the next white channel pixel is then processed until all white channel pixels are processed. If no, that is, if γ is not greater than the second preset value θ, the pixel value of the modified current white channel pixel W1 is not abnormal, and the pixel value of the current white channel pixel W1 is kept as w'1; the next white channel pixel is then processed until all white channel pixels are processed.
In one possible embodiment, the second preset value θ is greater than 1.4 and less than 1.6.
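The robustness check can be sketched as a small post-processing step applied to each corrected pixel before fusion; theta defaults to 1.5, inside the suggested range of 1.4 to 1.6, and the epsilon guard against division by zero is an implementation assumption.

```python
def keep_or_restore(w1: float, w1_corrected: float, theta: float = 1.5) -> float:
    """Robustness check preceding fusion: discard abnormal corrections.

    gamma = MAX(w'1 / w1, w1 / w'1); if gamma exceeds the second preset value
    theta, the correction is considered abnormal and the original value w1
    is restored, otherwise the corrected value is kept.
    """
    eps = 1e-6
    gamma = max(w1_corrected / max(w1, eps), w1 / max(w1_corrected, eps))
    return w1 if gamma > theta else w1_corrected
```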
The above embodiment provides a specific method for performing pixel value correction based on diagonal gradients; the pixel value correction may also be performed by other algorithms, for example a machine-learning-based method, as long as the consistency of the white channel pixels and the color channel pixels in the diagonal direction can be improved.
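For completeness, a sketch of step 104 as a whole is given below; it simply combines the per-pixel helpers sketched above (gradient computation, preset-condition test, formula (6) correction and the gamma/theta robustness check). The reflect padding for border pixels and the loop structure are implementation choices, not mandated by the text.

```python
import numpy as np

def correct_white_channel(white: np.ndarray,
                          beta: float = 0.5, thr: float = 6.0,
                          theta: float = 1.5) -> np.ndarray:
    """Correct every white channel pixel to obtain the white channel corrected image."""
    padded = np.pad(white.astype(np.float32), 2, mode="reflect")
    corrected = white.astype(np.float32).copy()
    h, w = white.shape
    for r in range(h):
        for c in range(w):
            # Offsets of +2 account for the padding added above.
            w1 = float(padded[r + 2, c + 2])
            w1_new = correct_white_pixel(padded, r + 2, c + 2, beta, thr)
            corrected[r, c] = keep_or_restore(w1, w1_new, theta)
    return corrected
```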
As shown in fig. 11, an embodiment of the present application further provides an image processing apparatus, including: an image acquisition unit 1, configured to acquire an image from an image sensor, where the image includes a plurality of repeating units arranged in a plurality of rows and a plurality of columns, each repeating unit includes four pixel blocks arranged in 2 rows and 2 columns, each pixel block includes a sub-pixel block arranged in m rows and n columns, m is greater than or equal to 1, n is greater than or equal to 1, each sub-pixel block includes two white pixels arranged in a first diagonal direction and two color pixels arranged in a second diagonal direction, color pixels in two pixel blocks arranged in the first diagonal direction in the four pixel blocks are all green pixels, and color pixels in two pixel blocks arranged in the second diagonal direction in the four pixel blocks are respectively blue pixels and red pixels; a merging unit 2, configured to merge two color pixels in each sub-pixel block in the image into a color channel pixel, and merge two white pixels in each sub-pixel block in the image into a white channel pixel; a splitting unit 3 for splitting the image into a white channel image and a color channel image, the white channel image comprising all white channel pixels and the color channel image comprising all color channel pixels; a correction unit 4 for correcting the white channel image to obtain a white channel corrected image; correcting the white channel image includes: increasing the blur degree of at least part of the white channel pixels in the white channel image in the first diagonal direction and/or decreasing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction; and the fusion unit 5 is used for carrying out image fusion on the white channel correction image and the color channel image to obtain a demosaiced image.
The specific process and principle of the image processing apparatus may be the same as those of the above embodiment, and will not be described herein.
It should be understood that the above division of the image processing apparatus is merely a division of a logic function, and may be fully or partially integrated into a physical entity or may be physically separated. And these modules may all be implemented in software in the form of calls by the processing element; or can be realized in hardware; it is also possible that part of the modules are implemented in the form of software called by the processing element and part of the modules are implemented in the form of hardware. For example, any one of the image acquisition unit, the merging unit, the splitting unit, the correction unit, and the fusion unit may be a processing element that is set up separately, may be integrated in the image processing apparatus, for example, may be implemented in a chip of the image processing apparatus, or may be stored in a memory of the image processing apparatus in a program form, and the functions of the above respective modules may be called and executed by a processing element of the image processing apparatus. The implementation of the other modules is similar. In addition, all or part of the modules can be integrated together or can be independently implemented. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in a software form.
For example, the image acquisition unit, the merging unit, the splitting unit, the correction unit and the fusion unit may be one or more integrated circuits configured to implement the above method, for example: one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA), etc. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can invoke the program. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
In one possible implementation, the process of increasing the blur degree of at least part of the white channel pixels in the white channel image in the first diagonal direction and/or decreasing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction includes: calculating a first gradient in a first diagonal direction and a second gradient in a second diagonal direction for each white channel pixel in the white channel image; and for the white channel pixels with the ratio of the first gradient to the second gradient meeting the preset condition, if the first gradient is smaller than the second gradient, performing blurring correction processing, and if the first gradient is larger than the second gradient, performing sharpening correction processing.
In one possible implementation, the first gradient is the sum of absolute values of differences between every two adjacent white channel pixels in a first diagonal direction in a neighborhood centered on the current white channel pixel; the second gradient is the sum of absolute values of differences between every two adjacent white channel pixels in the second diagonal direction in the neighborhood centered on the current white channel pixel.
In one possible implementation, the neighborhood is composed of white channel pixels arranged in 5 rows and 5 columns.
In one possible embodiment, the ratio of the first gradient to the second gradient satisfies a preset condition: the maximum value of the first ratio and the second ratio is larger than a first preset value, the first ratio is the ratio of the first gradient to the second gradient, and the second ratio is the ratio of the second gradient to the first gradient.
In one possible embodiment, the first preset value is greater than 5 and less than 8.
In one possible implementation manner, for a white channel pixel whose ratio of the first gradient to the second gradient satisfies the preset condition, performing the blur correction processing if the first gradient is smaller than the second gradient, and performing the sharpening correction processing if the first gradient is larger than the second gradient, includes: for the current white channel pixel, if the maximum value of the first ratio and the second ratio is greater than the first preset value and the first gradient is not equal to the second gradient, modifying the pixel value of the current white channel pixel according to the following formula: w'1 = w1 + β(w6 + w8 - w2 - w4); wherein w'1 is the pixel value of the current white channel pixel after modification, w1 is the pixel value of the current white channel pixel before modification, w2 and w4 are the pixel values of the white channel pixels adjacent to the current white channel pixel in the first diagonal direction and located on its two sides, w6 and w8 are the pixel values of the white channel pixels adjacent to the current white channel pixel in the second diagonal direction and located on its two sides, and 0 < β < 1.
In one possible embodiment, 0.3< β <0.6.
In one possible embodiment, before the process of performing image fusion on the white channel corrected image and the color channel image to obtain the demosaiced image, the method further includes: for the modified current white channel pixel, if the maximum value of the third ratio and the fourth ratio is larger than a second preset value, restoring the pixel value of the current white channel pixel to w1, wherein the third ratio is the ratio of w'1 to w1, and the fourth ratio is the ratio of w1 to w'1.
In one possible embodiment, the second preset value is greater than 1.4 and less than 1.6.
The embodiment of the application also provides an image processing device, which comprises: the image processing device comprises a processor and a memory, wherein the memory is used for storing at least one instruction, and the instruction is loaded and executed by the processor to realize the image processing method of any embodiment.
The number of processors may be one or more; for example, the processor may include an image signal processor (ISP). The processor and the memory may be connected by a bus or in other ways. The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the image processing apparatus in the embodiments of the present application. The processor executes various functional applications and data processing by running the non-transitory software programs, instructions and modules stored in the memory, that is, implements the method of any of the above method embodiments. The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store necessary data and the like. In addition, the memory may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device.
The embodiment of the application also provides electronic equipment comprising the image processing device. The electronic device to which the present application relates may be any product such as a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, an in-vehicle device, an unmanned aerial vehicle, a smart car, a smart speaker, a robot, smart glasses, and the like.
The embodiment of the present application also provides a computer-readable storage medium in which a computer program is stored, which when run on a computer, causes the computer to execute the image processing method in any of the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "and/or", describes an association relation of association objects, and indicates that there may be three kinds of relations, for example, a and/or B, and may indicate that a alone exists, a and B together, and B alone exists. Wherein A, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of the following" and the like means any combination of these items, including any combination of single or plural items. For example, at least one of a, b and c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
The above is only a preferred embodiment of the present application, and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (13)
1. An image processing method, comprising:
Acquiring an image from an image sensor, wherein the image comprises a plurality of repeating units arranged in a plurality of rows and a plurality of columns, each repeating unit comprises four pixel blocks arranged in 2 rows and 2 columns, each pixel block comprises a sub-pixel block arranged in m rows and n columns, m is more than or equal to 1, n is more than or equal to 1, each sub-pixel block comprises two white pixels arranged in a first diagonal direction and two color pixels arranged in a second diagonal direction, the color pixels in the two pixel blocks arranged in the first diagonal direction in the four pixel blocks are all green pixels, and the color pixels in the two pixel blocks arranged in the second diagonal direction in the four pixel blocks are respectively blue pixels and red pixels;
combining two color pixels in each of the sub-pixel blocks in the image into a color channel pixel, and combining two white pixels in each of the sub-pixel blocks in the image into a white channel pixel;
Splitting the image into a white channel image and a color channel image, the white channel image comprising all of the white channel pixels and the color channel image comprising all of the color channel pixels;
Correcting the white channel image to obtain a white channel corrected image so as to improve the consistency of the white channel pixels and the color channel pixels in the diagonal direction;
The correcting the white channel image includes: increasing the blur degree of at least part of the white channel pixels in the white channel image in the first diagonal direction and/or decreasing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction;
And carrying out image fusion on the white channel correction image and the color channel image to obtain a demosaiced image.
2. The image processing method according to claim 1, wherein,
The process of increasing the blur degree of at least part of the white channel pixels in the white channel image in the first diagonal direction and/or decreasing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction comprises:
Calculating a first gradient in the first diagonal direction and a second gradient in the second diagonal direction for each of the white channel pixels in the white channel image;
And for the white channel pixels with the ratio of the first gradient to the second gradient meeting the preset condition, performing blurring correction processing if the first gradient is smaller than the second gradient, and performing sharpening correction processing if the first gradient is larger than the second gradient.
3. The image processing method according to claim 2, wherein,
The first gradient is the sum of absolute values of difference values of every two adjacent white channel pixels in the first diagonal direction in the neighborhood taking the current white channel pixel as a center;
the second gradient is a sum of absolute values of differences between every two adjacent white channel pixels in the second diagonal direction in a neighborhood centered on the current white channel pixel.
4. The image processing method according to claim 3, wherein,
The neighborhood is composed of white channel pixels arranged in 5 rows and 5 columns.
5. The image processing method according to claim 3, wherein,
The ratio of the first gradient to the second gradient satisfies a preset condition:
the maximum value of the first ratio and the second ratio is larger than a first preset value, the first ratio is the ratio of the first gradient to the second gradient, and the second ratio is the ratio of the second gradient to the first gradient.
6. The image processing method according to claim 5, wherein,
The first preset value is greater than 5 and less than 8.
7. The image processing method according to claim 5, wherein,
The performing, for a white channel pixel whose ratio of the first gradient to the second gradient satisfies the preset condition, the blur correction processing if the first gradient is smaller than the second gradient, and the sharpening correction processing if the first gradient is larger than the second gradient, comprises:
For the current white channel pixel, if the maximum value of the first ratio and the second ratio is greater than the first preset value and the first gradient is not equal to the second gradient, modifying the pixel value of the current white channel pixel according to the following formula: w'1 = w1 + β(w6 + w8 - w2 - w4);
wherein w'1 is the pixel value of the current white channel pixel after modification, w1 is the pixel value of the current white channel pixel before modification, w2 and w4 are the pixel values of the white channel pixels adjacent to the current white channel pixel in the first diagonal direction and located on its two sides, and w6 and w8 are the pixel values of the white channel pixels adjacent to the current white channel pixel in the second diagonal direction and located on its two sides, 0 < β < 1.
8. The image processing method according to claim 7, wherein,
0.3 < β < 0.6.
9. The image processing method according to claim 7, wherein,
Before the process of performing image fusion on the white channel correction image and the color channel image to obtain the demosaiced image, the method further comprises the following steps:
For the modified current white channel pixel, if the maximum value of the third ratio and the fourth ratio is larger than a second preset value, restoring the pixel value of the current white channel pixel to w1, wherein the third ratio is the ratio of w'1 to w1, and the fourth ratio is the ratio of w1 to w'1.
10. The image processing method according to claim 9, wherein,
The second preset value is greater than 1.4 and less than 1.6.
11. An image processing apparatus, comprising:
An image acquisition unit configured to acquire an image from an image sensor, the image including a plurality of repeating units arranged in a plurality of rows and a plurality of columns, each of the repeating units including four pixel blocks arranged in 2 rows and 2 columns, each of the pixel blocks including sub-pixel blocks arranged in m rows and n columns, m being equal to or greater than 1, n being equal to or greater than 1, each of the sub-pixel blocks including two white pixels arranged in a first diagonal direction and two color pixels arranged in a second diagonal direction, the color pixels in the two pixel blocks arranged in the first diagonal direction in the four pixel blocks being green pixels, and the color pixels in the two pixel blocks arranged in the second diagonal direction in the four pixel blocks being blue pixels and red pixels, respectively;
a merging unit, configured to merge two color pixels in each of the sub-pixel blocks in the image into a color channel pixel, and merge two white pixels in each of the sub-pixel blocks in the image into a white channel pixel;
A splitting unit configured to split the image into a white channel image and a color channel image, where the white channel image includes all the white channel pixels, and the color channel image includes all the color channel pixels;
A correction unit, configured to correct the white channel image to obtain a white channel corrected image, so as to improve consistency of white channel pixels and color channel pixels in a diagonal direction;
The correcting the white channel image includes: increasing the blur degree of at least part of the white channel pixels in the white channel image in the first diagonal direction and/or decreasing the sharpness of at least part of the white channel pixels in the white channel image in the second diagonal direction;
And the fusion unit is used for carrying out image fusion on the white channel correction image and the color channel image to obtain a demosaiced image.
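Read together, the units of claim 11 form a short pipeline: acquire, merge, split, correct, fuse. The sketch below assumes 2x2 sub-pixel blocks with the two white pixels at the top-left and bottom-right positions, merges each pair by averaging, and leaves the fusion rule abstract; these layout and averaging choices are assumptions made only to show how data flows between the units:

```python
import numpy as np

def split_and_correct(raw, correct_white):
    """Merge each 2x2 sub-pixel block into one white and one colour channel pixel,
    then correct the white channel image.

    raw: single-plane sensor readout whose height and width are multiples of 2,
         with white pixels assumed at the top-left/bottom-right of each block.
    correct_white: callable mapping a white channel image to its corrected version
                   (for example, built from the per-pixel correction sketched above).
    """
    h, w = raw.shape
    blocks = raw.astype(np.float32).reshape(h // 2, 2, w // 2, 2)

    # merging unit: two white pixels on the first diagonal -> one white channel pixel
    white = 0.5 * (blocks[:, 0, :, 0] + blocks[:, 1, :, 1])
    # merging unit: two colour pixels on the second diagonal -> one colour channel pixel
    color = 0.5 * (blocks[:, 0, :, 1] + blocks[:, 1, :, 0])

    # splitting + correction units: the white channel image is corrected to improve
    # its diagonal consistency before fusion
    white_corrected = correct_white(white)

    # the fusion unit would combine white_corrected with the colour channel image
    # to produce the demosaiced image; that final step is outside this sketch
    return white_corrected, color
```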
12. An image processing apparatus, comprising:
a processor and a memory for storing at least one instruction which, when loaded and executed by the processor, implements the image processing method of any one of claims 1 to 10.
13. An electronic device comprising the image processing apparatus according to claim 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211013504.8A CN115442573B (en) | 2022-08-23 | 2022-08-23 | Image processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115442573A (en) | 2022-12-06
CN115442573B (en) | 2024-05-07
Family
ID=84243985
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---|
CN202211013504.8A Active CN115442573B (en) | 2022-08-23 | 2022-08-23 | Image processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115442573B (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5935876B2 (en) * | 2012-03-27 | 2016-06-15 | Sony Corporation | Image processing apparatus, imaging device, image processing method, and program
US10148926B2 (en) * | 2015-12-07 | 2018-12-04 | Samsung Electronics Co., Ltd. | Imaging apparatus and image processing method of thereof |
KR20220010285A (en) * | 2020-07-17 | 2022-01-25 | 에스케이하이닉스 주식회사 | Demosaic operation circuit, image sensing device and operation method thereof |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016040874A (en) * | 2014-08-12 | 2016-03-24 | Toshiba Corporation | Solid state image sensor
CN107274353A (en) * | 2017-05-17 | 2017-10-20 | Shanghai Integrated Circuit R&D Center Co., Ltd. | Method for correcting defective pixels in a black-and-white image
CN109285125A (en) * | 2018-07-24 | 2019-01-29 | Beijing Jiaotong University | Anisotropic multi-directional total variation image denoising method and apparatus
CN112261391A (en) * | 2020-10-26 | 2021-01-22 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, camera assembly and mobile terminal
CN113676708A (en) * | 2021-07-01 | 2021-11-19 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image generation method and device, electronic equipment and computer-readable storage medium
CN113676675A (en) * | 2021-08-16 | 2021-11-19 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image generation method and device, electronic equipment and computer-readable storage medium
Non-Patent Citations (1)
Title |
---|
Demosaicing algorithm for RGBX-format image sensors; Dong Pengyu; Integrated Circuit Applications; 2018-05-03 (05); full text *
Also Published As
Publication number | Publication date |
---|---|
CN115442573A (en) | 2022-12-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||