US20230274398A1 - Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium - Google Patents
Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium
- Publication number
- US20230274398A1 (application US 18/180,298)
- Authority
- US
- United States
- Prior art keywords
- image
- processing
- component
- pixel
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G06T5/003—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20204—Removing film grain; Adding simulated film grain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Definitions
- the present invention relates to an image processing apparatus, control method of same and non-transitory computer-readable storage medium.
- US-2011-0188775 (referred to hereinafter as document 1) is proposed as a technique (fog/haze removal technique) for correcting an image having decreased visibility due to fog, etc.
- visibility is improved by calculating, for each target pixel, the minimum value of the R, G, and B channels within a predetermined range around the target pixel, and correcting the contrast using the minimum value image.
- US-2016-0328832 (referred to hereinafter as document 2), a histogram is calculated from an input image, and parameters for fog/haze removal processing are determined based on the likelihood and the kurtosis of the histogram.
- parameters are determined based on the histogram of an entire image. Due to this, even if a user would like to improve the visibility of a specific object (a person, for example) in an image, processing is executed so as to improve the visibility over the entire image, and the visibility of a person present at a position where fog is thick, in particular, is not improved. Furthermore, there is a possibility that, if an attempt is made to improve the visibility of a person present at a position where fog is thick, the image will be unnatural due to the fog/haze removal processing being excessively applied to a person whose visibility is already secured.
- the present invention provides a technique for removing the influence of fog and haze in an image in accordance with the conditions of the shooting scene and an object whose visibility is desired to be improved.
- an image processing apparatus that processes image data obtained by image capturing performed by an image-capturing unit
- the image processing apparatus comprising: a first setting unit configured to set a first parameter for performing processing for removing an influence of a fine particle component based on the image data; a first image processing unit configured to perform fine particle removal processing based on the first parameter set by the first setting unit; a second setting unit configured to set a second parameter differing from the first parameter; a second image processing unit configured to perform fine particle removal processing based on the second parameter set by the second setting unit; a third setting unit configured to set a region for which the first image processing unit is to be used and a region for which the second image processing unit is to be used; and a generation unit configured to generate image data from which an influence of fine particles is removed by applying a result from the first image processing unit and a result from the second image processing unit to the respective regions set by the third setting unit.
- a method of controlling an image processing apparatus that processes image data obtained by image capturing performed by an image-capturing unit, the method comprising: a first setting step of setting a first parameter for performing processing for removing an influence of a fine particle component based on the image data; a first image processing step of performing fine particle removal processing based on the first parameter set in the first setting step; a second setting step of setting a second parameter differing from the first parameter; a second image processing step of performing fine particle removal processing based on the second parameter set in the second setting step; a third setting step of setting a region for which the first image processing step is to be used and a region for which the second image processing step is to be used; and a generation step of generating image data from which an influence of fine particles is removed by applying a result from the first image processing unit and a result from the second image processing unit to the respective regions set by the setting unit.
- a non-transitory computer-readable storage medium storing a program executable by a computer to execute a method of controlling an image processing apparatus that processes image data obtained by image capturing performed by an image-capturing unit, the method comprising: a first setting step of setting a first parameter for performing processing for removing an influence of a fine particle component based on the image data; a first image processing step of performing fine particle removal processing based on the first parameter set in the first setting step; a second setting step of setting a second parameter differing from the first parameter; a second image processing step of performing fine particle removal processing based on the second parameter set in the second setting step; a third setting step of setting a region for which the first image processing step is to be used and a region for which the second image processing step is to be used; and a generation step of generating image data from which an influence of fine particles is removed by applying a result from the first image processing unit and a result from the second image processing unit to the respective regions set by the setting unit
- the influence of fog and haze in an image is removed in accordance with the conditions of the shooting scene and an object whose visibility is desired to be improved.
- FIG. 1 is a block configuration diagram of an image processing apparatus in an embodiment.
- FIG. 2 is a functional block diagram of the image processing apparatus described in the embodiment.
- FIG. 3 is a diagram illustrating an internal configuration of a fine particle removal processing unit described in the embodiment.
- FIG. 4 is a flowchart illustrating processing in the image processing apparatus according to the embodiment.
- FIG. 5 is a flowchart illustrating fine particle removal processing according to the embodiment.
- FIGS. 6 A and 6 B are schematic diagrams illustrating the process of lower-pixel image generation according to the embodiment.
- FIG. 7 is a flowchart illustrating airglow estimation processing according to the embodiment.
- FIG. 8 is a flowchart illustrating lower-pixel corrected image generation processing according to the embodiment.
- FIG. 9 is a flowchart illustrating processing for generating an RGB lower-pixel-value corrected image according to the embodiment.
- FIGS. 10 A to 10 D are schematic diagrams illustrating filter processing in RGB lower-pixel image generation processing according to the embodiment.
- FIG. 11 is a flowchart illustrating Mie scattering component generation processing according to the embodiment.
- FIG. 12 is a flowchart illustrating Rayleigh scattering component generation processing according to the embodiment.
- processing for removing the influence of a fine particle component is first performed on an input image shot under conditions in which the fine particle component is generated.
- an object such as a person is extracted from the image from which the influence of the fine particle component has been removed, by performing object detection processing such as person detection.
- FIG. 1 is a block configuration diagram of an image processing apparatus 100 to which the present embodiment applies.
- the image processing apparatus 100 includes a CPU 101 , a RAM 102 , a ROM 103 , an HDD interface (I/F) 104 , an HDD 105 , an input I/F 106 , an output I/F 107 , and a system bus 108 .
- the CPU 101 is a processor that performs overall control of the constituent units described below.
- the RAM 102 is a memory that functions as the main memory and work area of the CPU 101 .
- the ROM 103 is a memory that stores various parameters and a program for controlling processing in the image processing apparatus 100 .
- the HDD I/F 104 is an interface conforming to the Serial ATA (SATA) standard, etc., for example, and connects the HDD 105 , which serves as a secondary storage apparatus, to the system bus 108 .
- the CPU 101 can read data from the HDD 105 and write data to the HDD 105 via the HDD I/F 104 . Furthermore, the CPU 101 can load data stored in the HDD 105 into the RAM 102 , and can similarly store data loaded in the RAM 102 to the HDD 105 . Also, the CPU 101 can execute data loaded into the RAM 102 , regarding the data as a program.
- the secondary storage apparatus may be a storage device other than a HDD, such as an optical disk drive.
- the input I/F 106 is a serial bus interface conforming to the USB standard, the IEEE1394 standard, etc., for example.
- the image processing apparatus 100 is connected to an external memory 109 and an image-capturing unit 111 via the input I/F 106 .
- the CPU 101 can obtain captured image data from the external memory 109 and the image-capturing unit 111 via the input I/F 106 .
- the output I/F 107 is a video output interface conforming to the DVI standard, the HDMI (registered trademark) standard, etc., for example.
- the image processing apparatus 100 is connected to a display unit 110 via the output I/F 107 .
- the CPU 101 can display images on the display unit 110 by outputting the images to the display unit 110 via the output I/F 107 .
- the system bus 108 is a transfer path for various types of data, and the constituent units in the image processing apparatus 100 are connected to one another via the system bus 108 .
- the external memory 109 is a storage medium such as a hard disk, a memory card, a CF card, an SD card, or a USB memory, and can store data such as image data processed by the image processing apparatus 100 .
- the display unit 110 is a display apparatus such as a display, and can display images processed by the image processing apparatus 100 , etc.
- the image-capturing unit 111 is a camera that uses an image sensor to receive an optical image of a photographic subject and outputs the obtained optical image as digital image data.
- image data whose contrast is decreased due to scattered light generated by fine particles, such as those of fog, is obtained by the image-capturing unit 111 through image capturing, and the image processing apparatus 100 generates an image in which the influence of fine particles is reduced by performing image processing described below.
- An operation unit 112 is constituted by one or more input devices such as a mouse and/or a keyboard, for example, and is used for specifying the later-described fog/haze removal range.
- FIG. 2 is a functional block diagram of the image processing apparatus described in the embodiment.
- the image processing apparatus includes an input image data obtaining unit 201 , a fine particle removal processing unit 202 , a fine particle removal image data output unit 203 , an input image data storing unit 204 , a fine particle removal image data storing unit 205 , and an object extraction processing unit 206 .
- the object extraction processing unit 206 is constituted by a known person detection technique, a known face detection technique, etc.
- the object extraction processing unit 206 performs detection of a shape of a person, a person’s face, etc., on an input image, and stores an area corresponding to the shape of a person, a person’s face, etc., in an object detection result storing unit 207 as a detection result.
- While the processing units illustrated in FIG. 2 are realized by the CPU 101 loading a program stored in the ROM 103 into the RAM 102 and executing the program, some of the processing units may be realized by means of hardware.
- FIG. 3 is a diagram illustrating an internal configuration of the fine particle removal processing unit 202 in the embodiment.
- the fine particle removal processing unit 202 includes an airglow calculating unit 301 , a lower-pixel image calculating unit 302 , a lower-pixel image-based correction processing unit 304 (hereinafter as correction processing unit 304 ), and an RGB lower-pixel image-based correction processing unit 305 (hereinafter as correction processing unit 305 ).
- the fine particle removal processing unit 202 also includes a Mie scattering component calculating unit 306 and a Rayleigh scattering component calculating unit 307 for controlling scattering components, and a composing unit 308 .
- the fine particle removal processing unit 202 includes, as storage locations of data for these various types of processing, an airglow data storing unit 309 , a lower-pixel image data storing unit 310 , a lower-pixel corrected data storing unit 312 , and an RGB lower-pixel corrected data storing unit 313 .
- the fine particle removal processing unit 202 includes a Mie scattering component data storing unit 314 and a Rayleigh scattering component data storing unit 315 for controlling scattering components.
- the fine particle removal processing unit 202 includes an image processing range data storing unit 316 that determines a processing range for fine particle removal processing.
- While the constituent blocks are realized by the CPU 101 executing programs held in the ROM 103 , using the HDD 105 and the RAM 102 , which serve as data holding areas, as necessary, some of the constituent blocks may be realized by means of hardware.
- step S 401 the CPU 101 controls the input image data obtaining unit 201 and causes the input image data obtaining unit 201 to obtain image data obtained through image capturing by the image-capturing unit 111 , and stores the image data in the input image data storing unit 204 .
- step S 402 the CPU 101 sets a parameter for performing processing for removing a fine particle component on the input image data.
- step S 403 the CPU 101 performs processing (described in detail later) for removing the influence of fine particles, based on the parameter set in step S 402 .
- step S 404 the CPU 101 , by using known object detection processing, performs object detection on the image subjected to the fine particle removal processing in step S 403 .
- the objects to be detected are objects whose visibility the user would like to improve.
- the CPU 101 extracts a person in the image data by applying known person detection processing.
- the CPU 101 encloses, with a rectangle, the surrounding region of a person area resulting from the extraction, and stores the person area in the form of position information of the rectangle area in an object detection result storing unit 207 .
- While an object detection result is a rectangle area in the present embodiment, the shape of the object detection result is not particularly limited, and the object detection result may have a shape other than a rectangular shape. Note that, in a case in which an object detection result is not a rectangle area, for example, it suffices to determine, for each pixel, whether the pixel is a pixel in which an object was detected.
- step S 405 the CPU 101 compares the image subjected to the processing for removing the influence of fine particles and the object detection result, determines which region in the image data an object was detected in, and varies processing depending upon the result of the determination. Specifically, the CPU 101 shifts to the processing in step S 408 for pixels corresponding to a region for which it has been determined that an object was detected in the image data, and shifts to the processing in step S 406 for pixels corresponding to a region for which it has been determined that no object was detected.
- step S 406 the CPU 101 sets a parameter for the fine particle removal processing to be executed in the subsequent step S 407 so that the object detection accuracy is further increased for the region in which no object was detected.
- the parameter is set so that the fine particle removal effect is increased.
- the parameter is set so that the later-described Mie scattering intensity coefficient m and the later-described Rayleigh scattering intensity coefficient r are smaller in the parameter for the second iteration than in the parameter for the first iteration.
- For example, setting m in the parameter for the second iteration to be smaller than in the first iteration, or setting m in the parameter for the second iteration to zero, can be considered.
- step S 407 the CPU 101 performs the fine particle removal processing once again based on the parameter set in step S 406 , which is for the second iteration of the fine particle removal processing.
- the image data that is processed here is not the image data subjected to the processing in step S 403 , and is the original input image data obtained in step S 401 .
- step S 408 the CPU 101 combines the image that is the result of the processing using the first parameter, for which it has been determined in step S 405 that an object was detected, and the image that is the result of the processing using the second parameter performed in step S 407 .
- one output image is generated using, for the detection region (pixels), which is the region for which it has been determined in step S 405 that an object was detected, the image that is the result of the processing using the first parameter, and using, for the non-detection region, the image that is the result of the processing using the second parameter.
- regions are set based on the object detection result, and for each of the regions, an image subjected to fine particle component removal processing having a different effect is generated.
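- The region-adaptive flow of steps S 401 to S 408 can be summarized in the following sketch. This is only an illustration of the control flow described above: the helper functions remove_fine_particles() and detect_objects() are hypothetical stand-ins for the fine particle removal processing of FIG. 5 and for the known object detection processing, and are not functions defined in this disclosure.

```python
import numpy as np

def region_adaptive_removal(input_image, remove_fine_particles, detect_objects,
                            first_param, second_param):
    """Sketch of steps S401-S408: two removal passes combined region by region."""
    # Step S403: first fine particle removal pass with the first parameter.
    first_result = remove_fine_particles(input_image, first_param)

    # Steps S404-S405: object detection on the first result; the detector is
    # assumed to return a boolean mask that is True where an object was found.
    detected_mask = detect_objects(first_result)

    # Steps S406-S407: second pass on the original input image with a parameter
    # chosen so that the removal effect is stronger (e.g. smaller m and r).
    second_result = remove_fine_particles(input_image, second_param)

    # Step S408: use the first result for the detection region and the second
    # result for the non-detection region, and output one combined image.
    return np.where(detected_mask[..., None], first_result, second_result)
```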
- step S 403 and step S 407 of FIG. 4 will be described in detail using the processing block diagrams in FIGS. 2 and 3 and the flowchart in FIG. 5 .
- step S 403 and step S 407 differ only in the parameter used for the processing.
- step S 501 the lower-pixel image calculating unit 302 calculates a lower-pixel image (described in detail later) from the input image data stored in the input image data storing unit 204 , and stores the lower-pixel image in the lower-pixel image data storing unit 310 .
- step S 502 the airglow calculating unit 301 calculates an airglow component (described in detail later) using the input image data stored in the input image data storing unit 204 and the lower-pixel image data stored in the lower-pixel image data storing unit 310 . Then, the airglow calculating unit 301 stores the calculated airglow data in the airglow data storing unit 309 .
- step S 503 the lower-pixel image-based correction processing unit 304 reads the airglow data stored in the airglow data storing unit 309 and the lower-pixel image data stored in the lower-pixel image data storing unit 310 . Furthermore, the correction processing unit 304 also reads image processing range data stored in the image processing range data storing unit 316 . Then, the correction processing unit 304 performs correction (described in detail later) on the input image data stored in the input image data storing unit 204 . The correction processing unit 304 stores the corrected image data in the lower-pixel corrected data storing unit 312 .
- step S 504 the RGB lower-pixel image-based correction processing unit 305 reads the airglow data stored in the airglow data storing unit 309 , the input image stored in the input image data storing unit 204 , and the image processing range data stored in the image processing range data storing unit 316 . Then, this correction processing unit 305 performs correction (described in detail later) on the input image data. The correction processing unit 305 stores the corrected image data in the RGB lower-pixel corrected data storing unit 313 .
- step S 505 the Mie scattering component calculating unit 306 reads the input image data stored in the input image data storing unit 204 and the lower-pixel corrected image data stored in the lower-pixel corrected data storing unit 312 . Then, the Mie scattering component calculating unit 306 calculates the Mie scattering component (described in detail later). The Mie scattering component calculating unit 306 stores the calculated Mie scattering component data in the Mie scattering component data storing unit 314 .
- step S 506 the Rayleigh scattering component calculating unit 307 reads the input image data stored in the input image data storing unit 204 . Furthermore, the Rayleigh scattering component calculating unit 307 also reads the lower-pixel corrected image data stored in the lower-pixel corrected data storing unit 312 and the RGB lower-pixel-value corrected image data stored in the RGB lower-pixel corrected data storing unit 313 . Then, the Rayleigh scattering component calculating unit 307 calculates the Rayleigh scattering component (described in detail later), and stores the calculated Rayleigh scattering component in the Rayleigh scattering component data storing unit 315 .
- step S 507 the composing unit 308 reads the RGB lower-pixel-value corrected image data stored in the RGB lower-pixel corrected data storing unit 313 . Furthermore, the composing unit 308 reads the Mie scattering component data stored in the Mie scattering component data storing unit 314 and the Rayleigh scattering component data stored in the Rayleigh scattering component data storing unit 315 . Subsequently, the composing unit 308 performs image composition (described in detail later), and stores the composed image data in the fine particle removal image data storing unit 205 .
- With the above, the fine particle removal processing in step S 403 is completed.
- the airglow calculating unit 301 first converts the input image from an RGB image into a luminance image (Y image). Next, the airglow calculating unit 301 generates a histogram from the Y image obtained as a result of the conversion, sets a value corresponding to the top 1% as a threshold, and performs robust estimation processing to determine pixels for estimating the airglow from among positions of pixels having pixel values greater than or equal to the threshold. Furthermore, the airglow calculating unit 301 estimates the airglow based on the pixel values of the determined pixels.
- step S 701 the airglow calculating unit 301 reads the input image data from the input image data storing unit 204 .
- step S 702 the airglow calculating unit 301 converts the read input image data from an RGB image into a Y image.
- a conventional formula for color conversion from RGB into Y may be applied as the conversion formula.
- step S 703 the airglow calculating unit 301 , from the Y image (luminance image) obtained through the conversion in step S 702 , generates candidates (referred to hereinafter as pixel position candidates) of airglow position information for performing airglow estimation.
- the airglow calculating unit 301 calculates a histogram of the read Y image, sets a value corresponding to the top 1% from the maximum value as a threshold, and determines the positions of pixels having values greater than or equal to the threshold as reference pixel position candidates. Note that, while the top 1% is set as the threshold in the present embodiment, the embodiment is not limited to this, and a different percentage may be adopted.
- step S 704 the airglow calculating unit 301 determines reference pixel position information (referred to hereinafter as pixel positions) for actually calculating the airglow. Specifically, based on the pixel position candidates determined in step S 703 , the airglow calculating unit 301 generates airglow position information using robust estimation such as the RANSAC method. This is because pixel positions corresponding to the sky portion are naturally desirable as pixel positions to be selected as airglow, and the exclusion of high luminance portions other than the sky in the image from the pixel position candidates is desired. Generally, high luminance portions other than the sky occupy a small proportion in an image, and tend to have a luminance different from the color of the sky.
- the number of pixel positions can also be limited in this process. This is for avoiding the following situation; in a case such as when there is a gradation in the color of the sky in an image, the same sky in the image includes different pixel values, and thus even a sky portion where the color changes would be subjected to estimation if too many pixels thereof are referred to.
- step S 705 in order to calculate the airglow, the airglow calculating unit 301 determines the pixel position from which the airglow component is to be extracted first from among the pixel positions determined in step S 704 . In doing so, it suffices to determine the first pixel position in the raster scan order (for example, the top-left most pixel position) from among the pixel positions determined in step S 704 as the pixel position from which the airglow component is to be extracted first.
- step S 706 the airglow calculating unit 301 adds the pixel values (R, G, B) of the reference pixel position initially determined in step S 705 or determined in step S 708 color by color, and holds the results in the RAM 102 , etc.
- step S 707 the airglow calculating unit 301 determines whether or not the search has been performed for all pixel positions determined in step S 704 .
- the airglow calculating unit 301 advances the processing to step S 709 if it is determined that the search has been performed for all pixel positions, and advances the processing to step S 708 if it is determined that the search is not complete.
- step S 708 the airglow calculating unit 301 moves the pixel position determined in step S 704 to the next position. Specifically, among the pixel positions determined in step S 704 , the pixel position that is closest in the raster scan order to the pixel position that is currently being referred to is set.
- step S 709 the airglow calculating unit 301 calculates the airglow component by averaging the pixel values added and held in the RAM 102 , etc., in step S 706 . Specifically, the airglow calculating unit 301 calculates the airglow component A_RGB based on the formulas below.
- A_RGB = (ΣA_R / n, ΣA_G / n, ΣA_B / n) ... (1)
- A_Y = (ΣA_R / n + ΣA_G / n + ΣA_B / n) / 3 ... (2)
- A_R, A_G, A_B, and A_Y respectively indicate the airglow component values of the R channel, the G channel, the B channel, and the lower-pixel image.
- n indicates the total number of reference pixels determined in step S 704
- Σ indicates the sum of the values of the pixels determined in step S 704 .
- formulas (1) and (2) given here are merely examples, and a different formula may be used as a calculation formula for the airglow estimation in the embodiment.
- formula (2) may be replaced with the smallest value among ΣA_R / n, ΣA_G / n, and ΣA_B / n.
- the airglow component can be estimated as described above.
- the airglow calculating unit 301 stores the estimated airglow component in the airglow data storing unit 309 .
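- As a rough illustration of the airglow estimation flow of FIG. 7 , the sketch below converts the input image to luminance, takes the positions of the top 1% of luminance values, and averages the candidate pixel values per channel in the manner of formulas (1) and (2). It is a simplification under stated assumptions: the robust estimation of step S 704 (e.g. the RANSAC method) is only indicated by a comment, and the function name and the use of np.quantile are choices made here, not part of the disclosure.

```python
import numpy as np

def estimate_airglow(rgb, top_fraction=0.01):
    """Sketch of FIG. 7: estimate the airglow from the brightest pixels."""
    # Step S702: convert the RGB image into a luminance (Y) image.
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

    # Step S703: pixel positions whose luminance falls in the top 1% become
    # the reference pixel position candidates.
    threshold = np.quantile(y, 1.0 - top_fraction)
    candidates = y >= threshold
    # Step S704 would further narrow the candidates with robust estimation
    # such as the RANSAC method; that refinement is omitted in this sketch.

    # Steps S705-S709: add up the candidate pixel values color by color and
    # average them (formula (1)), then derive A_Y as their mean (formula (2)).
    a_rgb = rgb[candidates].mean(axis=0)   # (A_R, A_G, A_B)
    a_y = a_rgb.mean()                     # A_Y, used for the lower-pixel image
    return a_rgb, a_y
```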
- In FIGS. 6 A and 6 B , peripheral pixels centered around a given target pixel P5 in the input image are denoted as pixels P1 to P4 and pixels P6 to P9. Furthermore, the R, G, and B component values of the pixels P1 to P9 are expressed as P1(R1, G1, B1) to P9(R9, G9, B9).
- T1 is a weighted average of the three lower ranking component values excluding the lowest component value G7, as indicated in formula (3).
- T1 = (2 × R4 + 4 × B1 + 2 × G9) / 8 ... (3)
- the lower-pixel image calculating unit 302 generates the lower-pixel image by performing the above-described processing for all pixels. Furthermore, the lower-pixel image calculating unit 302 stores the generated lower-pixel image in the lower-pixel image data storing unit 310 .
- the calculation method mentioned above is an example of the calculation formula for calculating the lower-pixel image, and calculation need not be performed following execution of this calculation formula. For example, calculation may be performed by averaging four lower ranking pixels from the second-to-lowest pixel.
- FIGS. 6 A and 6 B are merely examples.
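- The sketch below illustrates one reading of the lower-pixel image generation of step S 501 : for each target pixel, the 27 R, G, and B component values of the surrounding 3 x 3 pixels are sorted, the minimum is excluded, and the next three values are averaged with weights 2:4:2 as in formula (3). Which of the three values receives the weight of 4, and the use of edge padding at image borders, are assumptions made for this illustration.

```python
import numpy as np

def lower_pixel_image(rgb):
    """Sketch of step S501: build the lower-pixel image (cf. FIGS. 6A and 6B)."""
    h, w, _ = rgb.shape
    padded = np.pad(rgb.astype(np.float64), ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            # All R, G and B component values in the 3x3 neighborhood (27 values).
            values = np.sort(padded[i:i + 3, j:j + 3, :].ravel())
            # Weighted average of the three lower-ranking values excluding the
            # minimum, with weights 2:4:2 in the spirit of formula (3).
            out[i, j] = (2 * values[1] + 4 * values[2] + 2 * values[3]) / 8.0
    return out
```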
- step S 503 in FIG. 5 the corrected image generation processing (step S 503 in FIG. 5 ) based on the lower-pixel image, which is performed by the correction processing unit 304 , will be described with reference to the flowchart in FIG. 8 .
- step S 801 the correction processing unit 304 reads the lower-pixel image, the airglow data, and the input image from the airglow data storing unit 309 , the lower-pixel image data storing unit 310 , and the input image data storing unit 204 .
- step S 802 the correction processing unit 304 calculates a corrected lower-pixel image lower_A by correcting the lower-pixel image using the airglow data. Specifically, the correction processing unit 304 corrects the lower-pixel image based on the airglow data, according to formula (4) below.
- T_in_lower indicates the lower-pixel image generated in step S 501 .
- step S 803 the correction processing unit 304 generates a transmission distribution t lower (x, y) based on the corrected lower-pixel image lower_A calculated in step S 802 . Specifically, the formula below is applied to lower_A(x, y) generated in step S 802 .
- ω is a coefficient for adjustment, and is “0.9” for example.
- x and y are horizontal-direction and vertical-direction coordinates in the image.
- the coefficient ω is a value provided in order to prevent the value of a target pixel subjected to the fine particle removal processing from equaling zero due to the transmission equaling zero in a case in which the transmitted light of the pixel consists only of light scattered by fine particles, such as those of fog, and need not be “0.9” as mentioned above.
- step S 804 the correction processing unit 304 shapes the transmission distribution generated in step S 803 in accordance with the input image and the image processing range data, which is input from a UI.
- This shaping is performed because the transmission distribution t lower (x, y) needs to match the shapes of photographic subjects such as structures included in the image-captured data, and in order to limit the processing range to a transmission distribution range specified by means of the UI.
- the transmission distribution t(x, y) only includes information regarding approximate photographic subject shapes in the image-captured data.
- the shaping is performed because photographic subject shapes need to be accurately separated. Specifically, it suffices to use a known edge-preserving filter such as that disclosed in the document “Guided Image Filtering,” Kaiming He, Jian Sun, and Xiaoou Tang, in ECCV2010 (Oral).
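- For reference, a minimal grey-guide version of the edge-preserving filter cited above can be written as follows. This is a generic implementation of the guided filter of He et al. and not the patent's exact shaping step; in particular, the restriction to the processing range specified by means of the UI, mentioned in step S 804 , is not included, and the radius and eps defaults are arbitrary.

```python
import cv2
import numpy as np

def guided_filter(guide, src, radius=30, eps=1e-3):
    """Minimal single-channel guided filter (He et al., ECCV 2010)."""
    guide = guide.astype(np.float64)
    src = src.astype(np.float64)
    mean = lambda img: cv2.boxFilter(img, cv2.CV_64F, (radius, radius))
    mean_i, mean_p = mean(guide), mean(src)
    cov_ip = mean(guide * src) - mean_i * mean_p
    var_i = mean(guide * guide) - mean_i * mean_i
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    # The output is a locally linear function of the guide image, so edges of
    # the guide (the input photograph) are preserved in the shaped
    # transmission distribution.
    return mean(a) * guide + mean(b)
```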
- step S 805 the correction processing unit 304 calculates a corrected image that is based on the lower-pixel image from A Y and the transmission distribution t lower (x, y). Specifically, this is performed based on formula (6) below.
- J lower is the corrected image based on the lower-pixel image
- I is the input image
- t 0 is a coefficient for adjustment, and is “0.1” for example.
- t 0 is a value provided in order to prevent a situation in which the value of J lower fluctuates significantly due to a slight difference from the input image I, such as shot noise during the image capturing, in a case in which t lower is an extremely small value, and need not be “0.1” as mentioned above.
- max(·) is a function that returns the maximum value of the group of numerical values lined up inside the brackets.
- step S 806 the correction processing unit 304 stores the lower-pixel corrected image J lower calculated in step S 805 in the lower-pixel corrected data storing unit 312 .
- an image which is based on the lower-pixel image and in which the influence of the fine particle component is corrected can be created.
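- Formulas (4) to (6) themselves are not reproduced in this text, so the sketch below assumes the standard scattering-model form implied by the surrounding description: lower_A = T_in_lower / A_Y for the correction by the airglow, t_lower = 1 - ω × lower_A for the transmission distribution, and J_lower = (I - A_Y) / max(t_lower, t0) + A_Y for the recovery. These exact expressions, and the default parameter values, are assumptions made for illustration.

```python
import numpy as np

def lower_pixel_corrected_image(input_rgb, lower_img, a_y, omega=0.9, t0=0.1):
    """Sketch of FIG. 8 (steps S801-S806) under an assumed scattering model."""
    # Step S802: correct the lower-pixel image using the airglow (assumed formula (4)).
    lower_a = lower_img / a_y
    # Step S803: transmission distribution t_lower(x, y) (assumed formula (5)).
    t_lower = 1.0 - omega * lower_a
    # Step S804, the shaping with an edge-preserving filter and the limitation
    # to the UI-specified processing range, is omitted from this sketch.
    # Step S805: corrected image based on the lower-pixel image (assumed formula (6)).
    t = np.maximum(t_lower, t0)[..., None]
    return (input_rgb - a_y) / t + a_y
```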
- step S 504 in FIG. 5 the corrected image generation processing performed by the RGB lower-pixel image-based correction processing unit 305 will be described with reference to the flowchart of FIG. 9 .
- step S 901 the correction processing unit 305 reads the input image from the input image data storing unit 204 , and reads the airglow data from the airglow data storing unit 309 .
- step S 902 the correction processing unit 305 calculates an RGB lower-pixel-value image patch_RGB_A corrected using the airglow, by performing correction (filter processing) on the input image for each of the planes R, G, and B using the airglow data.
- the correction processing unit 305 calculates the RGB lower-pixel image patch_RGB_A corrected using the airglow by correcting an RGB lower-pixel-value image using the airglow data, according to formula (7) below.
- RGB_A(x, y, c) = T_in_RGB(x, y, c) / A_RGB ... (7)
- T_in_RGB indicates the RGB lower-pixel-value image data before correction
- RGB_A indicates the corrected RGB lower-pixel-value image data
- x and y indicate horizontal-direction and vertical-direction coordinates in the image
- c indicates a color plane (which is either R, G, or B).
- the correction processing unit 305 calculates the RGB lower-pixel image patch_RGB_A corrected using the airglow by performing filter processing on the previously-calculated RGB_A.
- patch_RGB_A is used for all calculations as the RGB lower-pixel image corrected using the airglow.
- In FIGS. 10 A to 10 D , the process of performing filter processing on a given target pixel T3 is illustrated as schematic diagrams.
- FIG. 10 A illustrates RGB_A.
- In FIGS. 10 B to 10 D , FIG. 10 A is shown separated into the planes R, G, and B.
- T3 indicates the target pixel to be processed in the filtering
- T3 R , T3 G , and T3 B respectively indicate the component value of the target pixel T3 in the planes R, G, and B.
- R1 to R4, G1 to G4, and B1 to B4 in FIGS. 10 B to 10 D indicate four values counted from the minimum value within a range of 5 x 5 pixels from the target pixel in the planes R, G, and B, respectively; the pixel values are ranked R4 > R3 > R2 > R1 in order of greater value.
- G1 to G4 for the G component and B1 to B4 for the B component have the same meanings.
- T3_R = (2 × R2 + 4 × R3 + 2 × R4) / 8 ... (8)
- the result obtained by substituting G2, G3, and G4 into formula (8) above will be the result in a case in which the minimum value T3 G for the G channel is calculated.
- the difference from the lower-pixel image is that, for these values, pixel values of the same plane as the plane of the target pixel are adopted.
- That is, unlike the RGB lower-pixel image, the lower-pixel image is calculated using the pixels of all of the planes around the target pixel, and pixels from any of the planes R, G, and B may be adopted; in the case of the RGB lower-pixel-value image, pixels are adopted from only the same plane as the target pixel. Due to this difference, the influence of wavelengths of scattered light can be taken into account.
- the RGB lower-pixel-value image patch_RGB_A corrected using the airglow is calculated by applying this processing to all pixels of RGB_A.
- step S 903 the correction processing unit 305 creates a transmission distribution t RGB (x, y, c) based on the RGB lower-pixel-value image corrected using the airglow, which was calculated in step S 902 .
- the following formula is applied to patch_RGB_A generated in step S 902 .
- ω is a coefficient for adjustment, and is for example 0.9.
- ω is a value provided in order to prevent the value of a target pixel subjected to fine particle removal processing from equaling zero due to the transmission equaling zero in a case in which the transmitted light of the pixel consists only of light scattered by fine particles, such as those of fog, and need not be 0.9 as mentioned above.
- step S 904 the correction processing unit 305 shapes the transmission distribution generated in step S 903 in accordance with the input image, and ensures that processing is not performed on portions outside the transmission distribution range specified by means of the UI.
- the specific procedures are the same as those in step S 804 , but in the case of the RGB lower-pixel-value image, the processing in step S 804 is performed for each color plane of the transmission distribution t RGB (x, y, c).
- step S 905 the correction processing unit 305 calculates a corrected image based on the RGB lower-pixel-value image from the airglow A RGB and the transmission distribution t RGB (x, y, c). Specifically, this is performed based on formula (10) below.
- J RGB is the corrected image based on the RGB lower-pixel-value image
- I is the input image.
- t 0 is a coefficient for adjustment, and is 0.1, for example.
- t 0 is a value provided in order to prevent a situation in which the value of J RGB fluctuates significantly due to a slight difference from the input image I, such as shot noise during the image capturing, in a case in which t RGB is an extremely small value, and need not be 0.1 as mentioned above.
- step S 906 the correction processing unit 305 stores J RGB calculated in step S 905 in the RGB lower-pixel corrected data storing unit 313 .
- an image obtained by correcting the RGB lower-pixel-value image can be created.
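- Formulas (9) and (10) are likewise not reproduced here; the sketch below assumes they are the per-channel counterparts of formulas (5) and (6), with the 5 x 5 same-plane weighted lower-pixel filter of formula (8) taking the place of the 3 x 3 all-plane filter. The loop-based filter, the treatment of image borders, and the assignment of the weight of 4 are illustrative assumptions.

```python
import numpy as np

def rgb_lower_pixel_corrected_image(input_rgb, a_rgb, omega=0.9, t0=0.1):
    """Sketch of FIG. 9 (steps S901-S906), processed per color plane."""
    h, w, _ = input_rgb.shape
    a_rgb = np.asarray(a_rgb, dtype=np.float64).reshape(1, 1, 3)

    # Step S902, first half (formula (7)): divide each plane by its airglow value.
    rgb_a = input_rgb / a_rgb

    # Step S902, second half (formula (8)): per plane, weighted average of the
    # three lower-ranking values (excluding the minimum) in a 5x5 window.
    padded = np.pad(rgb_a, ((2, 2), (2, 2), (0, 0)), mode="edge")
    patch_rgb_a = np.empty_like(rgb_a)
    for i in range(h):
        for j in range(w):
            for c in range(3):
                v = np.sort(padded[i:i + 5, j:j + 5, c].ravel())
                patch_rgb_a[i, j, c] = (2 * v[1] + 4 * v[2] + 2 * v[3]) / 8.0

    # Steps S903-S905: per-channel transmission distribution and recovery
    # (assumed formulas (9) and (10)); the shaping of step S904 is omitted.
    t_rgb = np.maximum(1.0 - omega * patch_rgb_a, t0)
    return (input_rgb - a_rgb) / t_rgb + a_rgb
```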
- step S 505 processing for calculating a light scattering component deriving from Mie scattering from the lower-pixel corrected image data and the input image, which is performed by the Mie scattering component calculating unit 306 , will be described with reference to the flowchart in FIG. 11 .
- step S 1101 the Mie scattering component calculating unit 306 reads the lower-pixel corrected data from the lower-pixel corrected data storing unit 312 , and reads the input image from the input image data storing unit 204 .
- step S 1102 the Mie scattering component calculating unit 306 subtracts a pixel value for each pixel in the image to extract the Mie scattering component. Specifically, the Mie scattering component calculating unit 306 calculates a Mie scattering component image according to formula (11) below.
- M(x, y, c) is the Mie scattering component image.
- the Mie scattering component can be extracted from the image by means of this calculation.
- step S 1103 the Mie scattering component calculating unit 306 stores the Mie scattering component image calculated in step S 1102 in the Mie scattering component data storing unit 314 .
- the Mie scattering component in the image can be calculated.
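- Formula (11) is not reproduced in this text. Given that step S 1102 is described as a per-pixel subtraction involving the input image and the lower-pixel corrected image, one plausible form is M = I - J_lower, as sketched below; this exact expression and the clipping to non-negative values are assumptions.

```python
import numpy as np

def mie_scattering_component(input_rgb, j_lower):
    """Sketch of FIG. 11: per-pixel subtraction (assumed form of formula (11))."""
    # The Mie scattering component is taken here as what the lower-pixel-based
    # correction removed from the input image; negative values are clipped.
    return np.clip(input_rgb - j_lower, 0.0, None)
```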
- step S 506 processing for calculating the Rayleigh scattering component, which is performed by the Rayleigh scattering component calculating unit 307 , will be described with reference to the flowchart in FIG. 12 .
- step S 1201 the Rayleigh scattering component calculating unit 307 reads the RGB lower-pixel-value corrected image data from the RGB lower-pixel corrected data storing unit 313 , reads the Mie scattering component image from the Mie scattering component data storing unit 314 , and reads the input image from the input image data storing unit 204 .
- step S 1202 the Rayleigh scattering component calculating unit 307 subtracts a pixel value for each pixel in the image, in order to obtain a Rayleigh scattering component image. Specifically, the calculation is performed according to formula (12) below.
- R(x, y, c) is the Rayleigh scattering component image.
- the Rayleigh scattering component can be extracted from the image by means of this calculation.
- step S 1203 the Rayleigh scattering component calculating unit 307 stores the Rayleigh scattering component image calculated in step S 1202 in the Rayleigh scattering component data storing unit 315 .
- the Rayleigh scattering component in the image can be calculated.
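- Formula (12) is also not reproduced. Since step S 1201 reads the input image, the RGB lower-pixel-value corrected image, and the Mie scattering component image, and step S 1202 performs a per-pixel subtraction, one plausible form is R = I - J_RGB - M, as sketched below; again, this exact expression and the clipping are assumptions. Under these assumptions R equals J_lower - J_RGB, so composing with m = r = 1 in formula (13) returns the input image and m = r = 0 returns the fully corrected image.

```python
import numpy as np

def rayleigh_scattering_component(input_rgb, j_rgb, mie):
    """Sketch of FIG. 12: per-pixel subtraction (assumed form of formula (12))."""
    # What remains after removing the fully corrected image and the Mie
    # component from the input image is treated as the Rayleigh component.
    return np.clip(input_rgb - j_rgb - mie, 0.0, None)
```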
- step S 507 the composing processing by the composing unit 308 will be described.
- the composing unit 308 calculates a composed image J out (x, y, c) according to formula 13 below.
- J_out(x, y, c) = J_RGB(x, y, c) + m × M(x, y, c) + r × R(x, y, c) ... (13)
- m is a Mie scattering intensity coefficient
- r is a Rayleigh scattering intensity coefficient.
- the coefficients take a value between zero and one in each of the first and second parameters, but other values may be used as a matter of course.
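- As a small sketch, formula (13) can be applied as follows; the default values of m and r below are illustrative only and are not values given in the disclosure. As described for steps S 406 and S 407 , choosing smaller m and r (down to zero) in the second parameter strengthens the removal effect for the corresponding region.

```python
def compose_output(j_rgb, mie, rayleigh, m=0.5, r=0.5):
    """Formula (13): J_out = J_RGB + m * M + r * R, with m and r in [0, 1]."""
    return j_rgb + m * mie + r * rayleigh
```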
- the effect of the removal processing can be changed based on object detection results, as described above. Accordingly, an image in which objects have higher visibility can be obtained.
- the above-described configurations pertaining to the removal of fine particles may be implemented in an image-capturing apparatus typified by a digital camera.
- the configurations may be implemented as an image-capturing mode to be used for performing image-capturing with a person as a photographic subject in the fog.
- processing may be performed using a pre-trained model having been subjected to machine learning, in place of processing units such as those described above.
- a plurality of combinations of input data and output data for the processing unit are prepared as learning data, for example.
- Knowledge is acquired from the plurality of pieces of learning data through machine learning, and a pre-trained model that outputs output data corresponding to input data as a result based on the acquired knowledge is generated.
- the pre-trained model can be configured by using a neural network model, for example.
- the pre-trained model performs the processing of the processing unit by operating in cooperation with a CPU, a GPU, etc., as a program for performing processing equivalent to that by the processing unit. Note that the above-described pre-trained model may be updated as necessary after predetermined processing is performed.
- Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
- the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
- the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
- the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)TM), a flash memory device, a memory card, and the like.
Abstract
This disclosure provides an image processing apparatus comprising a first setting unit which sets a first parameter for processing for removing an influence of a fine particle component based on image data; a first image processing unit which performs fine particle removal processing based on the first parameter; a second setting unit which sets a second parameter; a second image processing unit which performs fine particle removal processing based on the second parameter; a setting unit which sets a region for which the first image processing unit is to be used and a region for which the second image processing unit is to be used; and a generation unit which generates image data by applying a result from the first image processing unit and a result from the second image processing unit to the respective set regions.
Description
- The present invention relates to an image processing apparatus, control method of same and non-transitory computer-readable storage medium.
- In the field of surveillance cameras, etc., the degradation of image quality of a captured image due to a decrease in visibility caused by a fine particle component (for example, fog) present between a camera and a photographic subject is a problem. US-2011-0188775 (referred to hereinafter as document 1) is proposed as a technique (fog/haze removal technique) for correcting an image having decreased visibility due to fog, etc. In document 1, visibility is improved by calculating, for each target pixel, the minimum value of the R, G, and B channels within a predetermined range around the target pixel, and correcting the contrast using the minimum value image. Furthermore, in US-2016-0328832 (referred to hereinafter as document 2), a histogram is calculated from an input image, and parameters for fog/haze removal processing are determined based on the likelihood and the kurtosis of the histogram.
- In the technique disclosed in document 1, parameters used during image processing are uniquely held with respect to the entire image. However, there are cases in which it is better to vary parameters between a case in which the fog/haze removal technique is applied to a photographic subject at a relatively close distance and a case in which the fog/haze removal technique is applied to a photographic subject at a farther distance. Here, there is a possibility that, if processing is performed on a photographic subject at a close distance using parameters for a photographic subject at a farther distance, the image will be unnatural due to the effect of the processing being too strong for the photographic subject at a close distance.
- Furthermore, in the technique disclosed in document 2, parameters are determined based on the histogram of an entire image. Due to this, even if a user would like to improve the visibility of a specific object (a person, for example) in an image, processing is executed so as to improve the visibility over the entire image, and the visibility of a person present at a position where fog is thick, in particular, is not improved. Furthermore, there is a possibility that, if an attempt is made to improve the visibility of a person present at a position where fog is thick, the image will be unnatural due to the fog/haze removal processing being excessively applied to a person whose visibility is already secured.
- The present invention provides a technique for removing the influence of fog and haze in an image in accordance with the conditions of the shooting scene and an object whose visibility is desired to be improved.
- According to a first aspect of the invention, there is provided an image processing apparatus that processes image data obtained by image capturing performed by an image-capturing unit, the image processing apparatus comprising: a first setting unit configured to set a first parameter for performing processing for removing an influence of a fine particle component based on the image data; a first image processing unit configured to perform fine particle removal processing based on the first parameter set by the first setting unit; a second setting unit configured to set a second parameter differing from the first parameter; a second image processing unit configured to perform fine particle removal processing based on the second parameter set by the second setting unit; a third setting unit configured to set a region for which the first image processing unit is to be used and a region for which the second image processing unit is to be used; and a generation unit configured to generate image data from which an influence of fine particles is removed by applying a result from the first image processing unit and a result from the second image processing unit to the respective regions set by the third setting unit.
- According to a second aspect of the invention, there is provided a method of controlling an image processing apparatus that processes image data obtained by image capturing performed by an image-capturing unit, the method comprising: a first setting step of setting a first parameter for performing processing for removing an influence of a fine particle component based on the image data; a first image processing step of performing fine particle removal processing based on the first parameter set in the first setting step; a second setting step of setting a second parameter differing from the first parameter; a second image processing step of performing fine particle removal processing based on the second parameter set in the second setting step; a third setting step of setting a region for which the first image processing step is to be used and a region for which the second image processing step is to be used; and a generation step of generating image data from which an influence of fine particles is removed by applying a result from the first image processing unit and a result from the second image processing unit to the respective regions set by the setting unit.
- According to a third aspect of the invention, there is provided a non-transitory computer-readable storage medium storing a program executable by a computer to execute a method of controlling an image processing apparatus that processes image data obtained by image capturing performed by an image-capturing unit, the method comprising: a first setting step of setting a first parameter for performing processing for removing an influence of a fine particle component based on the image data; a first image processing step of performing fine particle removal processing based on the first parameter set in the first setting step; a second setting step of setting a second parameter differing from the first parameter; a second image processing step of performing fine particle removal processing based on the second parameter set in the second setting step; a third setting step of setting a region for which the first image processing step is to be used and a region for which the second image processing step is to be used; and a generation step of generating image data from which an influence of fine particles is removed by applying a result from the first image processing unit and a result from the second image processing unit to the respective regions set by the setting unit.
- According to the present invention, the influence of fog and haze in an image is removed in accordance with the conditions of the shooting scene and an object whose visibility is desired to be improved.
- Further features of the present invention will become apparent from the following description of an exemplary embodiment (with reference to the attached drawings).
-
FIG. 1 is a block configuration diagram of an image processing apparatus in an embodiment. -
FIG. 2 is a functional block diagram of the image processing apparatus described in the embodiment. -
FIG. 3 is a diagram illustrating an internal configuration of a fine particle removal processing unit described in the embodiment. -
FIG. 4 is a flowchart illustrating processing in the image processing apparatus according to the embodiment. -
FIG. 5 is a flowchart illustrating fine particle removal processing according to the embodiment. -
FIGS. 6A and 6B are schematic diagrams illustrating the process of lower-pixel image generation according to the embodiment. -
FIG. 7 is a flowchart illustrating airglow estimation processing according to the embodiment. -
FIG. 8 is a flowchart illustrating lower-pixel corrected image generation processing according to the embodiment. -
FIG. 9 is a flowchart illustrating processing for generating an RGB lower-pixel-value corrected image according to the embodiment. -
FIGS. 10A to 10D are schematic diagrams illustrating filter processing in RGB lower-pixel image generation processing according to the embodiment. -
FIG. 11 is a flowchart illustrating Mie scattering component generation processing according to the embodiment. -
FIG. 12 is a flowchart illustrating Rayleigh scattering component generation processing according to the embodiment. - Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note that the following embodiments are not intended to limit the scope of the claimed invention. Multiple features are described in the embodiments, but the invention is not limited to one that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
- In the present embodiment, processing for removing the influence of a fine particle component (referred to hereinafter as fine particles), such as fog, is first performed on an input image shot under conditions in which the fine particle component is generated. Next, an object such as a person is extracted from the image from which the influence of the fine particle component has been removed, by performing object detection processing such as person detection. By changing the ratio between the Mie scattering component and the Rayleigh scattering component in the input image based on this extraction result when generating a fine particle removal image, images in which the appearance of the influence of the fine particle component is varied are created. An image in which the object has higher visibility is obtained by combining these images.
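- The overall flow just described can be pictured with the short Python sketch below. It is only an illustrative reading of this embodiment: the function name improve_visibility and the callables remove, detect and combine are placeholders for the processing detailed in the following sections, not names used by this disclosure.

```python
def improve_visibility(img, remove, detect, combine, param_first, param_second):
    """Illustrative top-level flow: remove fog once, detect objects, remove
    again with a different scattering ratio, then combine per region."""
    first = remove(img, **param_first)      # first fine particle removal (step S403)
    boxes = detect(first)                   # object detection on the processed image (step S404)
    second = remove(img, **param_second)    # second removal on the original input (step S407)
    return combine(first, second, boxes)    # per-region combination (step S408)
```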
-
FIG. 1 is a block configuration diagram of an image processing apparatus 100 to which the present embodiment applies. The image processing apparatus 100 according to the present embodiment includes a CPU 101, a RAM 102, a ROM 103, an HDD interface (I/F) 104, an HDD 105, an input I/F 106, an output I/F 107, and a system bus 108. The CPU 101 is a processor that performs overall control of the constituent units described below. The RAM 102 is a memory that functions as the main memory and work area of the CPU 101. The ROM 103 is a memory that stores various parameters and a program for controlling processing in the image processing apparatus 100. - The HDD I/F 104 is an interface conforming to the Serial ATA (SATA) standard, etc., for example, and connects the
HDD 105, which serves as a secondary storage apparatus, to the system bus 108. The CPU 101 can read data from the HDD 105 and write data to the HDD 105 via the HDD I/F 104. Furthermore, the CPU 101 can load data stored in the HDD 105 into the RAM 102, and can similarly store data loaded in the RAM 102 to the HDD 105. Also, the CPU 101 can execute data loaded into the RAM 102, regarding the data as a program. Note that the secondary storage apparatus may be a storage device other than an HDD, such as an optical disk drive. The input I/F 106 is a serial bus interface conforming to the USB standard, the IEEE1394 standard, etc., for example. - The
image processing apparatus 100 is connected to an external memory 109 and an image-capturing unit 111 via the input I/F 106. The CPU 101 can obtain captured image data from the external memory 109 and the image-capturing unit 111 via the input I/F 106. The output I/F 107 is a video output interface conforming to the DVI standard, the HDMI (registered trademark) standard, etc., for example. The image processing apparatus 100 is connected to a display unit 110 via the output I/F 107. The CPU 101 can display images on the display unit 110 by outputting the images to the display unit 110 via the output I/F 107. - The
system bus 108 is a transfer path for various types of data, and the constituent units in the image processing apparatus 100 are connected to one another via the system bus 108. - The
external memory 109 is a storage medium such as a hard disk, a memory card, a CF card, an SD card, or a USB memory, and can store data such as image data processed by the image processing apparatus 100. - The
display unit 110 is a display apparatus such as a display, and can display images processed by the image processing apparatus 100, etc. - The image-capturing
unit 111 is a camera that receives an optical image of a photographic subject with an image sensor and outputs the obtained optical image as digital image data. In the image processing apparatus 100 according to the present embodiment, image data whose contrast is decreased due to scattered light generated by fine particles, such as those of fog, is obtained by the image-capturing unit 111 through image capturing, and the image processing apparatus 100 generates an image in which the influence of fine particles is reduced by performing image processing described below. - An
operation unit 112 is constituted by one or more input devices such as a mouse and/or a keyboard, for example, and is used for specifying the later-described fog/haze removal range. -
FIG. 2 is a functional block diagram of the image processing apparatus described in the embodiment. As illustrated in FIG. 2, the image processing apparatus according to the embodiment includes an input image data obtaining unit 201, a fine particle removal processing unit 202, a fine particle removal image data output unit 203, an input image data storing unit 204, a fine particle removal image data storing unit 205, and an object extraction processing unit 206. The object extraction processing unit 206 is constituted by a known person detection technique, a known face detection technique, etc. The object extraction processing unit 206 performs detection of a shape of a person, a person’s face, etc., on an input image, and stores an area corresponding to the shape of a person, a person’s face, etc., in an object detection result storing unit 207 as a detection result. Note that, while the processing units illustrated in FIG. 2 are realized by the CPU 101 loading a program stored in the ROM 103 in the RAM 102 and executing the program, some of the processing units may be realized by means of hardware. -
FIG. 3 is a diagram illustrating an internal configuration of the fine particle removal processing unit 202 in the embodiment. The fine particle removal processing unit 202 includes an airglow calculating unit 301, a lower-pixel image calculating unit 302, a lower-pixel image-based correction processing unit 304 (hereinafter as correction processing unit 304), and an RGB lower-pixel image-based correction processing unit 305 (hereinafter as correction processing unit 305). The fine particle removal processing unit 202 also includes a Mie scattering component calculating unit 306 and a Rayleigh scattering component calculating unit 307 for controlling scattering components, and a composing unit 308. Furthermore, the fine particle removal processing unit 202 includes, as storage locations of data for these various types of processing, an airglow data storing unit 309, a lower-pixel image data storing unit 310, a lower-pixel corrected data storing unit 312, and an RGB lower-pixel corrected data storing unit 313. The fine particle removal processing unit 202 includes a Mie scattering component data storing unit 314 and a Rayleigh scattering component data storing unit 315 for controlling scattering components. Furthermore, the fine particle removal processing unit 202 includes an image processing range data storing unit 316 that determines a processing range for fine particle removal processing. - While these constituent blocks are realized by the
CPU 101 executing programs held in the ROM 103, the HDD 105, and the RAM 102, which serve as data holding areas, as necessary, some of the constituent blocks may be realized by means of hardware. - The flow of processing in the
image processing apparatus 100 according to the embodiment will be described using the block diagram in FIG. 2 and the flowchart in FIG. 4. - In step S401, the
CPU 101 controls the input image data obtaining unit 201 and causes the input image data obtaining unit 201 to obtain image data obtained through image capturing by the image-capturing unit 111, and stores the image data in the input image data storing unit 204. - In step S402, the
CPU 101 sets a parameter for performing processing for removing a fine particle component on the input image data. - In step S403, the
CPU 101 performs processing (described in detail later) for removing the influence of fine particles, based on the parameter set in step S402. - In step S404, the
CPU 101, by using known object detection processing, performs object detection on the image subjected to the fine particle removal processing in step S403. Here, the objects to be detected are objects whose visibility the user would like to improve. For example, if the user would like to improve the visibility of a person, the CPU 101 extracts a person in the image data by applying known person detection processing. The CPU 101 encloses, with a rectangle, the surrounding region of a person area resulting from the extraction, and stores the person area in the form of position information of the rectangle area in an object detection result storing unit 207. Note that, while an object detection result is a rectangle area in the present embodiment, the shape of the object detection result is not particularly limited, and the object detection result may have a shape other than a rectangular shape. Note that, in a case in which an object detection result is not a rectangle area, for example, it suffices to determine, for each pixel, whether the pixel is a pixel in which an object was detected. - In step S405, the
CPU 101 compares the image subjected to the processing for removing the influence of fine particles and the object detection result, determines which region in the image data an object was detected in, and varies processing depending upon the result of the determination. Specifically, theCPU 101 shifts to the processing in step S408 for pixels corresponding to a region for which it has been determined that an object was detected in the image data, and shifts to the processing in step S406 for pixels corresponding to a region for which it has been determined that no object was detected. - In step S406, the
CPU 101 sets a parameter for the fine particle removal processing to be executed in the subsequent step S407 so that the object detection accuracy is further increased for the region in which no object was detected. Specifically, the parameter is set so that the fine particle removal effect is increased. For example, the parameter is set so that the later-described Mie scattering intensity coefficient m and the later-described Rayleigh scattering intensity coefficient r are smaller in the parameter for the second iteration than in the parameter for the first iteration. For m in particular, the value in the parameter for the second iteration may simply be made smaller, or may be set to zero. - In step S407, the
CPU 101 performs the fine particle removal processing once again based on the parameter set in step S406, which is for the second iteration of the fine particle removal processing. The image data that is processed here is not the image data subjected to the processing in step S403, but the original input image data obtained in step S401. - Then, in step S408, the
CPU 101 combines the image that is the result of the processing using the first parameter, for which it has been determined in step S405 that an object was detected, and the image that is the result of the processing using the second parameter performed in step S407. Specifically, one output image is generated using, for the detection region (pixels), which is the region for which it has been determined in step S405 that an object was detected, the image that is the result of the processing using the first parameter, and using, for the non-detection region, the image that is the result of the processing using the second parameter. - In such a manner, regions are set based on the object detection result, and for each of the regions, an image subjected to fine particle component removal processing having a different effect is generated.
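- The per-region combination of steps S405 to S408 can be sketched as follows. This is only an illustrative reading: the detection result is assumed here to be a list of rectangles, result_first and result_second stand for the images produced with the first and second parameters, and the parameter values shown are purely examples.

```python
import numpy as np

# Example parameter sets (illustrative values only): the second iteration uses
# smaller Mie/Rayleigh intensity coefficients so that more scattered light is removed.
param_first = {"m": 0.6, "r": 0.6}
param_second = {"m": 0.0, "r": 0.3}

def combine_by_detection(result_first, result_second, detection_boxes):
    """Merge two fine particle removal results using object detection areas.

    result_first    : H x W x 3 image processed with the first parameter (step S403)
    result_second   : H x W x 3 image processed with the second parameter (step S407)
    detection_boxes : list of (x0, y0, x1, y1) rectangles from the object detection
    """
    h, w = result_first.shape[:2]
    detected = np.zeros((h, w), dtype=bool)
    for x0, y0, x1, y1 in detection_boxes:
        detected[y0:y1, x0:x1] = True        # pixels where an object was detected

    # Step S408: use the first-parameter result inside detection regions and the
    # second-parameter result everywhere else.
    return np.where(detected[..., None], result_first, result_second)
```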
- Here, the fine particle removal processing in steps S403 and S407 of
FIG. 4 will be described in detail using the processing block diagrams in FIGS. 2 and 3 and the flowchart in FIG. 5. Note that steps S403 and S407 differ only in the parameter used for the processing. - In step S501, the lower-pixel
image calculating unit 302 calculates a lower-pixel image (described in detail later) from the input image data stored in the input image data storing unit 204, and stores the lower-pixel image in the lower-pixel image data storing unit 310. - In step S502, the
airglow calculating unit 301 calculates an airglow component (described in detail later) using the input image data stored in the input image data storing unit 204 and the lower-pixel image data stored in the lower-pixel image data storing unit 310. Then, the airglow calculating unit 301 stores the calculated airglow data in the airglow data storing unit 309. - In step S503, the lower-pixel image-based
correction processing unit 304 reads the airglow data stored in the airglow data storing unit 309 and the lower-pixel image data stored in the lower-pixel image data storing unit 310. Furthermore, the correction processing unit 304 also reads image processing range data stored in the image processing range data storing unit 316. Then, the correction processing unit 304 performs correction (described in detail later) on the input image data stored in the input image data storing unit 204. The correction processing unit 304 stores the corrected image data in the lower-pixel corrected data storing unit 312. - In step S504, the RGB lower-pixel image-based
correction processing unit 305 reads the airglow data stored in the airglow data storing unit 309, the input image stored in the input image data storing unit 204, and the image processing range data stored in the image processing range data storing unit 316. Then, this correction processing unit 305 performs correction (described in detail later) on the input image data. The correction processing unit 305 stores the corrected image data in the RGB lower-pixel corrected data storing unit 313. - In step S505, the Mie scattering
component calculating unit 306 reads the input image data stored in the input image data storing unit 204 and the lower-pixel corrected image data stored in the lower-pixel corrected data storing unit 312. Then, the Mie scattering component calculating unit 306 calculates the Mie scattering component (described in detail later). The Mie scattering component calculating unit 306 stores the calculated Mie scattering component data in the Mie scattering component data storing unit 314. - In step S506, the Rayleigh scattering
component calculating unit 307 reads the input image data stored in the input image data storing unit 204. Furthermore, the Rayleigh scattering component calculating unit 307 also reads the lower-pixel corrected image data stored in the lower-pixel corrected data storing unit 312 and the RGB lower-pixel-value corrected image data stored in the RGB lower-pixel corrected data storing unit 313. Then, the Rayleigh scattering component calculating unit 307 calculates the Rayleigh scattering component (described in detail later), and stores the calculated Rayleigh scattering component in the Rayleigh scattering component data storing unit 315. - In step S507, the composing
unit 308 reads the RGB lower-pixel-value corrected image data stored in the RGB lower-pixel correcteddata storing unit 313. Furthermore, the composingunit 308 reads the Mie scattering component data stored in the Mie scattering componentdata storing unit 314 and the Rayleigh scattering component data stored in the Rayleigh scattering componentdata storing unit 315. Subsequently, the composingunit 308 performs image composition (described in detail later), and stores the composed image data in the fine particle removal imagedata storing unit 205. - As a result of the processing described above, the fine particle removal processing in step S403 is completed.
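- A compact sketch of the airglow estimation used in step S502 (detailed below as steps S701 to S709) is given here. It is not the disclosed implementation: formulas (1) and (2) are not reproduced in the text, so the per-channel average over the candidate pixels and the mean of the three channel averages are assumptions, and the robust estimation of step S704 (RANSAC or similar) is omitted for brevity.

```python
import numpy as np

def estimate_airglow(img, top_percent=1.0):
    """Estimate the airglow from the brightest pixels of the luminance image.

    img : H x W x 3 RGB image, float
    Returns (a_rgb, a_y): per-channel airglow values and a single value for the
    lower-pixel image.
    """
    # Step S702: conventional RGB-to-Y conversion.
    y = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

    # Step S703: keep the top 1% of the luminance histogram as candidates.
    thresh = np.percentile(y, 100.0 - top_percent)
    candidates = img[y >= thresh]            # robust outlier rejection (step S704) omitted

    # Steps S706-S709 (assumed form of formulas (1) and (2)):
    a_rgb = candidates.mean(axis=0)          # AR, AG, AB as channel-wise averages
    a_y = float(a_rgb.mean())                # AY; the text also allows, e.g., the minimum
    return a_rgb, a_y
```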
- Next, the airglow calculation processing in step S502 will be described. The
airglow calculating unit 301 first converts the input image from an RGB image into a luminance image (Y image). Next, the airglow calculating unit 301 generates a histogram from the Y image obtained as a result of the conversion, sets a value corresponding to the top 1% as a threshold, and performs robust estimation processing to determine pixels for estimating the airglow from among positions of pixels having pixel values greater than or equal to the threshold. Furthermore, the airglow calculating unit 301 estimates the airglow based on the pixel values of the determined pixels. - In the following, the details of the
airglow calculating unit 301 in the embodiment will be described with reference to the flowchart in FIG. 7. - In step S701, the
airglow calculating unit 301 reads the input image data from the input image data storing unit 204. - In step S702, the
airglow calculating unit 301 converts the read input image data from an RGB image into a Y image. Here, a conventional formula for color conversion from RGB into Y may be applied as the conversion formula. - In step S703, the
airglow calculating unit 301, from the Y image (luminance image) obtained through the conversion in step S702, generates candidates (referred to hereinafter as pixel position candidates) of airglow position information for performing airglow estimation. Specifically, the airglow calculating unit 301 calculates a histogram of the read Y image, sets a value corresponding to the top 1% from the maximum value as a threshold, and determines the positions of pixels having values greater than or equal to the threshold as reference pixel position candidates. Note that, while the top 1% is set as the threshold in the present embodiment, the embodiment is not limited to this, and a different percentage may be adopted. - In step S704, the
airglow calculating unit 301 determines reference pixel position information (referred to hereinafter as pixel positions) for actually calculating the airglow. Specifically, the airglow calculating unit 301 generates airglow position information from the pixel position candidates determined in step S703 by using robust estimation such as the RANSAC method. This is because pixel positions corresponding to the sky portion are naturally desirable as pixel positions to be selected as airglow, and high luminance portions other than the sky in the image are desirably excluded from the pixel position candidates. Generally, high luminance portions other than the sky occupy a small proportion in an image, and tend to have a luminance different from the color of the sky. Thus, robust estimation is performed in which high luminance portions other than the sky can be treated as outliers and excluded. Furthermore, the number of pixel positions can also be limited in this process. This avoids the following situation: in a case such as when there is a gradation in the color of the sky in an image, the same sky includes different pixel values, and thus even a sky portion where the color changes would be subjected to estimation if too many of its pixels are referred to. - In step S705, in order to calculate the airglow, the
airglow calculating unit 301 determines the pixel position from which the airglow component is to be extracted first from among the pixel positions determined in step S704. In doing so, it suffices to determine the first pixel position in the raster scan order (for example, the top-left most pixel position) from among the pixel positions determined in step S704 as the pixel position from which the airglow component is to be extracted first. - In step S706, the
airglow calculating unit 301 adds the pixel values (R, G, B) of the reference pixel position initially determined in step S705 or determined in step S708 color by color, and holds the results in the RAM 102, etc. - In step S707, the
airglow calculating unit 301 determines whether or not the search has been performed for all pixel positions determined in step S704. The airglow calculating unit 301 advances the processing to step S709 if it is determined that the search has been performed for all pixel positions, and advances the processing to step S708 if it is determined that the search is not complete. - In step S708, the
airglow calculating unit 301 moves the pixel position determined in step S704 to the next position. Specifically, among the pixel positions determined in step S704, the pixel position that is closest in the raster scan order to the pixel position that is currently being referred to is set. - In step S709, the
airglow calculating unit 301 calculates the airglow component by averaging the pixel values added and held in the RAM 102, etc., in step S706. Specifically, the airglow calculating unit 301 calculates the airglow component ARGB based on the formulas below.
-
- Here, AR, AG, AB, and AY respectively indicate the airglow component values of the R channel, the G channel, the B channel, and the lower-pixel image. Furthermore, n indicates the total number of reference pixels determined in step S704, and Σ indicates the sum of the values of the pixels determined in step S704. Note that formulas (1) and (2) given here are merely examples, and a different formula may be used as a calculation formula for the airglow estimation in the embodiment. For example, formula (2) may be replaced with the smallest value among ΣAR/n, ΣAG/n, and ΣAB/n.
- The airglow component can be estimated as described above. The
airglow calculating unit 301 stores the estimated airglow component in the airglowdata storing unit 309. - Next, the lower-pixel image generation processing performed by the lower-pixel
image calculating unit 302 in step S501 will be described using FIGS. 6A and 6B. As illustrated in FIG. 6A, peripheral pixels centered around a given target pixel P5 in the input image are denoted as pixels P1 to P4 and pixels P6 to P9. Furthermore, the R, G, and B component values of the pixels P1 to P9 are expressed as P1(R1, G1, B1) to P9(R9, G9, B9). - Furthermore, suppose that these component values are ranked in the order of R5 > B3 > R2 > ... > R4 > B1 > G9 > G7. Here, when a lower-pixel of the target pixel P5 is defined as T1 as illustrated in
FIG. 6B, T1 is a weighted average of the three lower ranking component values excluding the lowest component value G7, as indicated in formula (3). By adopting a weighted average rather than the minimum value, a situation in which the lower-pixel image is highly influenced by sensor noise can be prevented. That is, a situation in which a pixel that is highly influenced by sensor noise is generated in the processed image can be suppressed compared to the case in which the minimum value is adopted.
- The lower-pixel
image calculating unit 302 generates the lower-pixel image by performing the above-described processing for all pixels. Furthermore, the lower-pixel image calculating unit 302 stores the generated lower-pixel image in the lower-pixel image data storing unit 310. Note that the calculation method mentioned above is an example of the formula for calculating the lower-pixel image, and the calculation need not follow this exact formula. For example, the calculation may be performed by averaging the four lower ranking values starting from the second-lowest value. Furthermore, while a lower-pixel is generated by referring to peripheral pixels located at a distance of one pixel from the target pixel in the present embodiment, a lower-pixel may of course be calculated by referring to peripheral pixels located at a distance of two pixels, or reference may be made to peripheral pixels at a farther distance. It should be understood that FIGS. 6A and 6B are merely examples.
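- A minimal sketch of this lower-pixel image generation (step S501) is shown below. Since formula (3) is not reproduced in the text, equal weights are assumed for the three retained values; the function name calc_lower_pixel_image is illustrative only.

```python
import numpy as np

def calc_lower_pixel_image(img, radius=1, weights=(1/3, 1/3, 1/3)):
    """For each pixel, gather all R, G and B values in the surrounding
    (2*radius+1)^2 window (3 x 3 by default, as in FIG. 6A), drop the smallest
    value, and take a weighted average of the next three smallest values."""
    h, w, _ = img.shape
    pad = np.pad(img.astype(np.float64),
                 ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.empty((h, w), dtype=np.float64)
    win = 2 * radius + 1
    for yy in range(h):
        for xx in range(w):
            vals = pad[yy:yy + win, xx:xx + win, :].ravel()
            lowest4 = np.sort(vals)[:4]                 # minimum plus the next three
            out[yy, xx] = np.dot(weights, lowest4[1:])  # skip the minimum itself
    return out
```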
- Next, the corrected image generation processing (step S503 in FIG. 5) based on the lower-pixel image, which is performed by the correction processing unit 304, will be described with reference to the flowchart in FIG. 8. - In step S801, the
correction processing unit 304 reads the airglow data, the lower-pixel image, and the input image from the airglow data storing unit 309, the lower-pixel image data storing unit 310, and the input image data storing unit 204, respectively. - In step S802, the
correction processing unit 304 calculates a corrected lower-pixel image lower_A by correcting the lower-pixel image using the airglow data. Specifically, the correction processing unit 304 corrects the lower-pixel image based on the airglow data, according to formula (4) below.
- lower_A(x, y) = Tin_lower(x, y) / AY ... (4)
- In step S803, the
correction processing unit 304 generates a transmission distribution tlower(x, y) based on the corrected lower-pixel image lower_A calculated in step S802. Specifically, the formula below is applied to lower_A(x, y) generated in step S802. -
- tlower(x, y) = 1 − ω × lower_A(x, y) ... (5)
- In step S804, the
correction processing unit 304 shapes the transmission distribution generated in step S803 in accordance with the input image and the image processing range data, which is input from a UI. This shaping is performed because the transmission distribution tlower(x, y) needs to match the shapes of photographic subjects such as structures included in the image-captured data, and in order to limit the processing range to a transmission distribution range specified by means of the UI. In the processing up to step S803, the transmission distribution t(x, y) only includes information regarding approximate photographic subject shapes in the image-captured data. Thus, the shaping is performed because photographic subject shapes need to be accurately separated. Specifically, it suffices to use a known edge-preserving filter as that disclosed in the document “Guided Image Filtering,” Kaiming He, Jian Sun, and Xiaoou Tang, in ECCV2010 (Oral). - Next, measures are taken so that the processing for removing the influence of fine particles is not performed on pixel portions outside the transmission distribution range specified using the UI. For values exceeding t_th_max and values falling below t_th_min that have been specified by means of the UI, tlower(x, y) = 1 is substituted into the transmission distribution tlower(x, y).
- Note that, while the values set by means of the UI are applied after the filter processing in the present embodiment, the values set by means of the UI first may of course be applied.
- In step S805, the
correction processing unit 304 calculates a corrected image that is based on the lower-pixel image from AY and the transmission distribution tlower(x, y). Specifically, this is performed based on formula (6) below. -
- Jlower(x, y) = (I(x, y) − AY) / max(tlower(x, y), t0) + AY ... (6)
- In step S806, the
correction processing unit 304 stores the lower-pixel corrected image Jlower calculated in step S805 in the lower-pixel correcteddata storing unit 312. - By performing the above-described processing, an image which is based on the lower-pixel image and in which the influence of the fine particle component is corrected can be created.
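- The correction of steps S802 to S805 can be sketched as follows under the formulas reconstructed above. The edge-preserving (guided) filtering of step S804 is omitted and only the UI-specified range limitation is reproduced; the function and parameter names are illustrative, not part of the disclosure.

```python
import numpy as np

def correct_with_lower_pixel(img, lower, a_y, omega=0.9, t0=0.1,
                             t_th_min=0.0, t_th_max=1.0):
    """Lower-pixel-based correction (steps S802-S805).

    img   : H x W x 3 input image, float
    lower : H x W lower-pixel image from step S501
    a_y   : airglow value AY for the lower-pixel image
    """
    lower_a = lower / a_y                               # formula (4), assumed normalization
    t_lower = 1.0 - omega * lower_a                     # formula (5)

    # Step S804 (partial): pixels outside the UI-specified transmission range
    # are excluded from the removal processing by forcing t = 1.
    t_lower = np.where((t_lower > t_th_max) | (t_lower < t_th_min), 1.0, t_lower)

    t = np.maximum(t_lower, t0)[..., None]              # guard against division by ~0
    return (img - a_y) / t + a_y                        # formula (6)
```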
- Next, the corrected image generation processing (step S504 in
FIG. 5) performed by the RGB lower-pixel image-based correction processing unit 305 will be described with reference to the flowchart of FIG. 9. - In step S901, the
correction processing unit 305 reads the input image from the input image data storing unit 204, and reads the airglow data from the airglow data storing unit 309. - In step S902, the
correction processing unit 305 calculates an RGB lower-pixel-value image patch_RGB_A corrected using the airglow, by performing correction (filter processing) on the input image for each of the planes R, G, and B using the airglow data. - First, the
correction processing unit 305 calculates corrected image data RGB_A by correcting the RGB lower-pixel-value image using the airglow data, according to formula (7) below.
- RGB_A(x, y, c) = Tin_RGB(x, y, c) / ARGB(c) ... (7)
- Next, the
correction processing unit 305 calculates the RGB lower-pixel image patch_RGB_A corrected using the airglow by performing filter processing on the previously-calculated RGB_A. In the processing following step S903, patch_RGB_A is used for all calculations as the RGB lower-pixel image corrected using the airglow. - Here, details of the filter processing in the present processing will be described with reference to
FIGS. 10A to 10D . - In
FIGS. 10A to 10D, the process of performing filter processing on a given target pixel T3 is illustrated as schematic diagrams. FIG. 10A illustrates RGB_A. In FIGS. 10B to 10D, FIG. 10A is shown separated into the planes R, G, and B. T3 indicates the target pixel to be processed in the filtering, and T3R, T3G, and T3B respectively indicate the component value of the target pixel T3 in the planes R, G, and B. Furthermore, R1 to R4, G1 to G4, and B1 to B4 in FIGS. 10B to 10D indicate four values counted from the minimum value within a range of 5 × 5 pixels from the target pixel in the planes R, G, and B, respectively, and the pixel values are ranked R4 > R3 > R2 > R1 in order of greater value. G1 to G4 for the G component and B1 to B4 for the B component have the same meanings. - In a case in which filter processing is performed on the corrected RGB lower-pixel-value image data RGB_A, the lower-pixels of the planes R, G, and B within the range of 5 × 5 pixels with the target pixel T3 at the center differ for each color, as illustrated in
FIGS. 10B to 10D. Due to this, a lower-pixel filter processing result T3R for the R channel as illustrated in FIG. 10B is calculated according to formula (8) below.
- Similarly, the result obtained by substituting G2, G3, and G4 into formula (8) above will be the result in a case in which the minimum value T3G for the G channel is calculated. This similarly applies to the minimum value T3B for the B channel as well. The difference from the lower-pixel image is that, for these values, pixel values of the same plane as the plane of the target pixel are adopted. The lower-pixel image differs from the RGB lower-pixel image, and calculation is performed using the pixels of all of the planes around the target pixel. Thus, pixels from any of the planes R, G, and B may be adopted, but in the case of the RGB lower-pixel-value, pixels are adopted from only the same plane. Due to this difference, the influence of wavelengths of scattered light can be taken into account.
- Following this, the RGB lower-pixel-value image patch_RGB_A corrected using the airglow is calculated by applying this processing to all pixels of RGB_A.
- In step S903, the
correction processing unit 305 creates a transmission distribution tRGB(x, y, c) based on the RGB lower-pixel-value image corrected using the airglow, which was calculated in step S902. The following formula is applied to patch_RGB_A generated in step S902. -
- tRGB(x, y, c) = 1 − ω × patch_RGB_A(x, y, c) ... (9)
- In step S904, the
correction processing unit 305 shapes the transmission distribution generated in step S903 in accordance with the input image, and ensures that processing is not performed on portions outside the transmission distribution range specified by means of the UI. The specific procedures are the same as those in step S804, but in the case of the RGB lower-pixel-value image, the processing in step S804 is performed for each color plane of the transmission distribution tRGB(x, y, c). - In step S905, the
correction processing unit 305 calculates a corrected image based on the RGB lower-pixel-value image from the airglow ARGB and the transmission distribution tRGB(x, y, c). Specifically, this is performed based on formula (10) below. -
- JRGB(x, y, c) = (I(x, y, c) − ARGB(c)) / max(tRGB(x, y, c), t0) + ARGB(c) ... (10)
- In step S906, the
correction processing unit 305 stores JRGB calculated in step S905 in the RGB lower-pixel correcteddata storage unit 313. - By performing the above-described processing, an image obtained by correcting the RGB lower-pixel-value image can be created.
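- Steps S902 to S905 can be strung together as in the sketch below, under the formulas reconstructed above. Tin_RGB is taken here to be the input image treated plane by plane, the guided filtering of step S904 is omitted, and rgb_lower_pixel_filter refers to the illustrative sketch shown earlier; none of these names come from the disclosure.

```python
import numpy as np

def correct_with_rgb_lower_pixel(img, a_rgb, omega=0.9, t0=0.1,
                                 t_th_min=0.0, t_th_max=1.0):
    """RGB lower-pixel-value-based correction (steps S902-S905).

    img   : H x W x 3 input image, float
    a_rgb : length-3 airglow vector (AR, AG, AB)
    """
    rgb_a = img / a_rgb                                  # formula (7), per colour plane
    patch_rgb_a = rgb_lower_pixel_filter(rgb_a)          # step S902 per-plane filter
    t_rgb = 1.0 - omega * patch_rgb_a                    # formula (9)

    # Step S904 (partial): keep only the UI-specified transmission range.
    t_rgb = np.where((t_rgb > t_th_max) | (t_rgb < t_th_min), 1.0, t_rgb)

    return (img - a_rgb) / np.maximum(t_rgb, t0) + a_rgb  # formula (10)
```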
- Next, processing (step S505) for calculating a light scattering component deriving from Mie scattering from the lower-pixel corrected image data and the input image, which is performed by the Mie scattering
component calculating unit 306, will be described with reference to the flowchart in FIG. 11. - In step S1101, the Mie scattering
component calculating unit 306 reads the lower-pixel corrected data from the lower-pixel corrected data storing unit 312, and reads the input image from the input image data storing unit 204. - In step S1102, the Mie scattering
component calculating unit 306 performs a pixel-by-pixel subtraction to extract the Mie scattering component. Specifically, the Mie scattering component calculating unit 306 calculates a Mie scattering component image according to formula (11) below.
- M(x, y, c) = I(x, y, c) − Jlower(x, y, c) ... (11)
- In step S1103, the Mie scattering
component calculating unit 306 stores the Mie scattering component image calculated in step S1102 in the Mie scattering componentdata storing unit 314. - By performing processing as described above, the Mie scattering component in the image can be calculated.
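- Under the reconstructed formula (11), step S1102 amounts to the following one-line sketch; the function name is illustrative only.

```python
def calc_mie_component(img, j_lower):
    """Mie scattering component (step S1102): the part of the input image that
    the lower-pixel-based correction removed."""
    return img - j_lower   # assumes img and j_lower are H x W x 3 float arrays
```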
- Next, processing (step S506) for calculating the Rayleigh scattering component, which is performed by the Rayleigh scattering
component calculating unit 307, will be described with reference to the flowchart in FIG. 12. - In step S1201, the Rayleigh scattering
component calculating unit 307 reads the RGB lower-pixel-value corrected image data from the RGB lower-pixel corrected data storing unit 313, reads the Mie scattering component image from the Mie scattering component data storing unit 314, and reads the input image from the input image data storing unit 204. - In step S1202, the Rayleigh scattering
component calculating unit 307 subtracts a pixel value for each pixel in the image, in order to obtain a Rayleigh scattering component image. Specifically, the calculation is performed according to formula (12) below. -
- R(x, y, c) = I(x, y, c) − JRGB(x, y, c) − M(x, y, c) ... (12)
- In step S1203, the Rayleigh scattering
component calculating unit 307 stores the Rayleigh scattering component image calculated in step S1202 in the Rayleigh scattering componentdata storing unit 315. - By performing processing as described above, the Rayleigh scattering component in the image can be calculated.
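- Likewise, under the reconstructed formula (12), step S1202 can be sketched as follows; the function name is illustrative only.

```python
def calc_rayleigh_component(img, j_rgb, mie):
    """Rayleigh scattering component (step S1202): what remains of the scattered
    light after the RGB-lower-pixel correction and the Mie component are removed."""
    return img - j_rgb - mie   # assumes H x W x 3 float arrays
```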
- Next, composing processing (step S507) by the composing
unit 308 will be described. - The composing
unit 308 calculates a composed image Jout(x, y, c) according to formula (13) below.
- Jout(x, y, c) = JRGB(x, y, c) + m × M(x, y, c) + r × R(x, y, c) ... (13)
- According to the present invention, when processing for removing the influence of a fine particle component in an image is performed, the effect of the removal processing can be changed based on object detection results, as described above. Accordingly, an image in which objects have higher visibility can be obtained.
- For example, if a person is set as a target to be detected (object), an image in which the influence of the fine particle component has been removed and which is specific to the person can be obtained.
- Note that the above-described configurations pertaining to the removal of fine particles may be implemented in an image-capturing apparatus typified by a digital camera. For example, the configurations may be implemented as an image-capturing mode to be used for performing image-capturing with a person as a photographic subject in the fog. In this case, it suffices to incorporate the image-capturing
unit 111 in FIG. 1 as a part of the configuration of the image processing apparatus 100. - Note that, with regard to the object
extraction processing unit 206, the fine particle removal processing unit, etc., among the above-described processing units, processing may be performed using a pre-trained model having been subjected to machine learning, in place of such units. In that case, a plurality of combinations of input data and output data for the processing unit are prepared as learning data, for example. Knowledge is acquired from the plurality of pieces of learning data through machine learning, and a pre-trained model that outputs output data corresponding to input data as a result based on the acquired knowledge is generated. The pre-trained model can be configured by using a neural network model, for example. Furthermore, the pre-trained model performs the processing of the processing unit by operating in cooperation with a CPU, a GPU, etc., as a program for performing processing equivalent to that by the processing unit. Note that the above-described pre-trained model may be updated as necessary after predetermined processing is performed. - Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
- While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
- This application claims the benefit of Japanese Patent Application No. 2019-084427, filed Apr. 25, 2019 which is hereby incorporated by reference herein in its entirety.
Claims (9)
1. An image processing apparatus that processes image data obtained by image capturing, the image processing apparatus comprising:
a processor; and
a memory storing a program which, read and executed by the processor, causes the processor to function as:
a detecting unit configured to detect an object in an image corresponding to the image data based on the image data; and
an image processing unit configured to perform a first processing for removing a component corresponding to scattered light caused by fine particles existing in air on a first area where the object has been detected in the image, and to perform a second processing for removing the component on a second area where the object has not been detected in the image, the second processing having a different effect for removing the component from the first processing.
2. The apparatus according to claim 1 ,
wherein the second processing removes the component to a further extent compared to the first processing.
3. The apparatus according to claim 2 , further comprising a generation unit configured to generate a composed image by selecting image data obtained by the first processing for the first area and by selecting image data obtained by the second processing for the second area.
4. The apparatus according to claim 1 , further comprising an image-capturing unit configured to capture the image data obtained by the image capturing.
5. The apparatus according to claim 1 ,
wherein the first processing and the second processing each perform calculation of a Mie scattering component and calculation of a Rayleigh scattering component, and are performed by generating a composed image in which the calculated Mie scattering component and the calculated Rayleigh scattering component are used.
6. The apparatus according to claim 5 ,
wherein the first processing uses a first parameter and the second processing uses a second parameter, the first parameter and the second parameter including a Mie scattering intensity coefficient and a Rayleigh scattering intensity coefficient, the Mie scattering intensity coefficient and the Rayleigh scattering intensity coefficient respectively indicating a contribution of the Mie scattering component and a contribution of the Rayleigh scattering component in the generation of the composed image, and
the Mie scattering intensity coefficient and the Rayleigh scattering intensity coefficient have smaller values in the second parameter than in the first parameter.
7. The apparatus according to claim 1 ,
wherein the first area is specified by a user instruction via a user interface.
8. A method of controlling an image processing apparatus that processes image data obtained by image capturing, the method comprising:
detecting an object in an image corresponding to the image data based on the image data;
performing a first processing for removing a component corresponding to scattered light caused by fine particles existing in air on a first area where the object has been detected in the image; and
performing a second processing for removing the component on a second area where the object has not been detected in the image, the second processing having a different effect for removing the component from the first processing.
9. A non-transitory computer-readable storage medium storing instructions which, when read and executed by a computer, cause the computer to perform the steps of a method of controlling an image processing apparatus that processes image data obtained by image capturing, the method comprising:
detecting an object in an image corresponding to the image data based on the image data;
performing a first processing for removing a component corresponding to scattered light caused by fine particles existing in air on a first area where the object has been detected in the image; and
performing a second processing for removing the component on a second area where the object has not been detected in the image, the second processing having a different effect for removing the component from the first processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/180,298 US20230274398A1 (en) | 2019-04-25 | 2023-03-08 | Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019084427A JP7421273B2 (en) | 2019-04-25 | 2019-04-25 | Image processing device and its control method and program |
JP2019-084427 | 2019-04-25 | ||
US16/852,883 US11636576B2 (en) | 2019-04-25 | 2020-04-20 | Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium |
US18/180,298 US20230274398A1 (en) | 2019-04-25 | 2023-03-08 | Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/852,883 Continuation US11636576B2 (en) | 2019-04-25 | 2020-04-20 | Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230274398A1 true US20230274398A1 (en) | 2023-08-31 |
Family
ID=72921543
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/852,883 Active 2040-11-13 US11636576B2 (en) | 2019-04-25 | 2020-04-20 | Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium |
US18/180,298 Pending US20230274398A1 (en) | 2019-04-25 | 2023-03-08 | Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/852,883 Active 2040-11-13 US11636576B2 (en) | 2019-04-25 | 2020-04-20 | Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium |
Country Status (3)
Country | Link |
---|---|
US (2) | US11636576B2 (en) |
JP (1) | JP7421273B2 (en) |
CN (1) | CN111935391B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465715B (en) * | 2020-11-25 | 2023-08-08 | 清华大学深圳国际研究生院 | Image scattering removal method based on iterative optimization of atmospheric transmission matrix |
Family Cites Families (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6411305B1 (en) * | 1999-05-07 | 2002-06-25 | Picsurf, Inc. | Image magnification and selective image sharpening system and method |
JP3697552B2 (en) | 2001-11-05 | 2005-09-21 | 独立行政法人科学技術振興機構 | Method for measuring atmospheric nitrogen dioxide concentration by single wavelength laser-induced fluorescence method and nitrogen dioxide concentration measuring apparatus using the method |
US7724980B1 (en) * | 2006-07-24 | 2010-05-25 | Adobe Systems Incorporated | System and method for selective sharpening of images |
US8340461B2 (en) | 2010-02-01 | 2012-12-25 | Microsoft Corporation | Single image haze removal using dark channel priors |
CN103201615A (en) | 2010-11-05 | 2013-07-10 | 株式会社咀嚼机能研究所 | Imaging device, method for processing images captured by said imaging device, and image capture system |
CN102637293B (en) * | 2011-02-12 | 2015-02-25 | 株式会社日立制作所 | Moving image processing device and moving image processing method |
KR101582478B1 (en) | 2012-05-03 | 2016-01-19 | 에스케이 텔레콤주식회사 | Image processing apparatus for image haze removal and method using that |
KR101901184B1 (en) * | 2012-09-20 | 2018-09-21 | 삼성전자주식회사 | Apparatus and method for processing color image using depth image |
US9659237B2 (en) * | 2012-10-05 | 2017-05-23 | Micro Usa, Inc. | Imaging through aerosol obscurants |
JP6065527B2 (en) | 2012-11-08 | 2017-01-25 | ソニー株式会社 | Fine particle sorting device and fine particle sorting method |
JP6249638B2 (en) | 2013-05-28 | 2017-12-20 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
WO2015005196A1 (en) | 2013-07-09 | 2015-01-15 | 株式会社日立国際電気 | Image processing device and image processing method |
CN104346774B (en) * | 2013-07-29 | 2018-04-03 | 诺基亚技术有限公司 | Method and apparatus for image enhaucament |
JP6282095B2 (en) * | 2013-11-27 | 2018-02-21 | キヤノン株式会社 | Image processing apparatus, image processing method, and program. |
CN106462947B (en) | 2014-06-12 | 2019-10-18 | Eizo株式会社 | Demister and image generating method |
JP2016173777A (en) * | 2015-03-18 | 2016-09-29 | 株式会社 日立産業制御ソリューションズ | Image processing apparatus |
KR102390918B1 (en) | 2015-05-08 | 2022-04-26 | 한화테크윈 주식회사 | Defog system |
JP6635799B2 (en) * | 2016-01-20 | 2020-01-29 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
JP6818463B2 (en) * | 2016-08-08 | 2021-01-20 | キヤノン株式会社 | Image processing equipment, image processing methods and programs |
US10269098B2 (en) * | 2016-11-01 | 2019-04-23 | Chun Ming Tsang | Systems and methods for removing haze in digital photos |
EP3334150B1 (en) | 2016-12-06 | 2022-09-07 | Canon Kabushiki Kaisha | Image processing apparatus |
WO2019064825A1 (en) * | 2017-09-27 | 2019-04-04 | ソニー株式会社 | Information processing device, information processing method, control device, and image processing device |
-
2019
- 2019-04-25 JP JP2019084427A patent/JP7421273B2/en active Active
-
2020
- 2020-04-20 US US16/852,883 patent/US11636576B2/en active Active
- 2020-04-23 CN CN202010329292.9A patent/CN111935391B/en active Active
-
2023
- 2023-03-08 US US18/180,298 patent/US20230274398A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US11636576B2 (en) | 2023-04-25 |
JP2020181399A (en) | 2020-11-05 |
JP7421273B2 (en) | 2024-01-24 |
US20200342575A1 (en) | 2020-10-29 |
CN111935391A (en) | 2020-11-13 |
CN111935391B (en) | 2022-10-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10719730B2 (en) | Image processing apparatus, image processing method, and non-transitory storage medium | |
US10210643B2 (en) | Image processing apparatus, image processing method, and storage medium storing a program that generates an image from a captured image in which an influence of fine particles in an atmosphere has been reduced | |
US9607240B2 (en) | Image processing apparatus, image capturing apparatus, image processing method, image capturing method, and non-transitory computer-readable medium for focus bracketing | |
KR101633377B1 (en) | Method and Apparatus for Processing Frames Obtained by Multi-Exposure | |
US10027897B2 (en) | Image processing apparatus, image processing method, and storage medium | |
US10145790B2 (en) | Image processing apparatus, image processing method, image capturing device and storage medium | |
US10839529B2 (en) | Image processing apparatus and image processing method, and storage medium | |
JP2017138647A (en) | Image processing device, image processing method, video photographing apparatus, video recording reproduction apparatus, program and recording medium | |
JP7449507B2 (en) | Method of generating a mask for a camera stream, computer program product and computer readable medium | |
US11301974B2 (en) | Image processing apparatus, image processing method, image capturing apparatus, and storage medium | |
US11074742B2 (en) | Image processing apparatus, image processing method, and storage medium | |
US9489721B2 (en) | Image processing apparatus, image processing method, and storage medium | |
US20230274398A1 (en) | Image processing apparatus for reducing influence of fine particle in an image, control method of same, and non-transitory computer-readable storage medium | |
CN110290310B (en) | Image processing apparatus for reducing step artifacts from image signals | |
US11842570B2 (en) | Image processing apparatus, image pickup apparatus, image processing method, and storage medium | |
JP6661268B2 (en) | Image processing apparatus, image processing method, and program | |
US11334966B2 (en) | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium | |
JP6494817B2 (en) | Image processing apparatus, image processing method, and program. | |
JP6324192B2 (en) | Image processing apparatus, imaging apparatus, image processing method, and program | |
JP2022069197A (en) | Image processing apparatus, method for controlling the same, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |