CN112967305B - Image cloud background detection method under complex sky scene - Google Patents


Info

Publication number
CN112967305B
CN112967305B (application CN202110312071.5A)
Authority
CN
China
Prior art keywords
edge
neighborhood
contour
extracting
morphological
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110312071.5A
Other languages
Chinese (zh)
Other versions
CN112967305A (en)
Inventor
朱伟
刘羽
董小舒
邱文嘉
辛付豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Laisi Electronic Equipment Co ltd
Original Assignee
Nanjing Laisi Electronic Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Laisi Electronic Equipment Co ltd filed Critical Nanjing Laisi Electronic Equipment Co ltd
Priority to CN202110312071.5A priority Critical patent/CN112967305B/en
Publication of CN112967305A publication Critical patent/CN112967305A/en
Application granted granted Critical
Publication of CN112967305B publication Critical patent/CN112967305B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image cloud background detection method for complex sky scenes, aimed at the air-target search scenario of a photoelectric search system. The method first constructs a neighborhood convolution kernel template, extracts first- and second-neighborhood envelope features, and performs related-neighborhood filtering on the original image; it then constructs horizontal, vertical, main-diagonal and secondary-diagonal multidirectional morphological gradient structure operators, extracts the contour edges of the image, and fuses the multidirectional detection results through adaptively constructed directional edge weight factors to obtain a contour edge image; finally, it performs neighborhood connected-domain labeling on the edge image to extract the cloud background contours.

Description

Image cloud background detection method under complex sky scene
Technical Field
The invention relates to the field of image processing and computer vision, in particular to an image cloud background detection method under a complex sky scene.
Background
With the increasing complexity of the modern battlefield environment, photoelectric search and tracking systems are in ever wider demand across battlefield defense scenarios owing to their passive operation, resistance to environmental interference, and similar characteristics. For the problem of detecting small air targets with a photoelectric system when thick cloud layers occlude the target, the existing solutions mainly include neural networks, morphological filtering, and inter-frame difference methods. These perform well when the background changes smoothly, but under a complex cloud background with a low frame rate their detection performance degrades considerably and the false alarm rate is high.
Disclosure of Invention
The invention aims to: aiming at the defects of the prior art, the invention provides an image cloud background detection method under a complex sky scene, which comprises the following steps:
Step 1, constructing a 5×5 neighborhood convolution kernel template S, extracting the medians of the related first and second neighborhood envelope features, and performing a neighborhood convolution operation on the original Image1 to obtain the related-neighborhood-filtered Image2;
Step 2, constructing directional gradient edge operators and performing morphological directional gradient contour edge extraction on Image2;
Step 3, extracting contour detail edges to obtain unidirectional edge detection results, constructing fusion edge weight factors λ_i, and performing edge detection and computation over the horizontal, vertical, main-diagonal and secondary-diagonal directions to obtain the edge Image3;
Step 4, performing connected-domain labeling on the edge Image3 and extracting the cloud background connected-domain contours.
The step 1 comprises the following steps:
Step 1-1, defining the 5×5 neighborhood convolution kernel template as S, with the S matrix initialized to 0;
Step 1-2, computing the median m_1 of the first neighborhood W_1 of the window centered at f(x, y) of the current image, i.e. m_1 = median(W_1), where median is the median operation over a vector, x is the horizontal coordinate, y is the vertical coordinate, and W_1 is:
W_1 = [f(x-1, y), f(x, y+1), f(x, y), f(x+1, y), f(x, y-1)]
Step 1-3, computing the median m_2 of the second neighborhood W_2 of the window centered at f(x, y) of the current image, i.e. m_2 = median(W_2), where W_2 is:
Step 1-4, setting the center pixel of the convolution kernel template to S(2, 2) = Max(m_1, m_2) to obtain the neighborhood convolution kernel template S, where Max is the maximum operation over a vector.
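For illustration, the following Python sketch (NumPy only) implements the step-1 related-neighborhood filter as described above: each output pixel takes the value placed at the template centre S(2, 2), i.e. Max(m_1, m_2). Since the W_2 formula is not reproduced in this text, the second neighborhood is assumed here to be the four diagonal neighbours plus the centre; the function name and border handling are illustrative only.

```python
import numpy as np

def related_neighborhood_filter(image1: np.ndarray) -> np.ndarray:
    # Step-1 sketch: each output pixel becomes max(median(W1), median(W2)),
    # i.e. the value assigned to the centre S(2, 2) of the template.
    f = image1.astype(np.float32)
    image2 = f.copy()
    rows, cols = f.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            # First neighborhood W1 (given in the text): 4-connected cross plus centre.
            w1 = [f[y, x - 1], f[y + 1, x], f[y, x], f[y, x + 1], f[y - 1, x]]
            # Second neighborhood W2 is not given in this extract; the four
            # diagonal neighbours plus the centre are an assumption for illustration.
            w2 = [f[y - 1, x - 1], f[y - 1, x + 1], f[y, x],
                  f[y + 1, x - 1], f[y + 1, x + 1]]
            image2[y, x] = max(np.median(w1), np.median(w2))
    return image2
```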
In step 2, constructing the directional gradient edge operators comprises:
establishing the horizontal, vertical, main-diagonal and secondary-diagonal directional gradient structure operators U_1, U_2, U_3, U_4 respectively, the directional gradient structure operator U_i being:
where i = 1, 2, 3, 4.
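The U_i matrices themselves are not reproduced in this text. The sketch below shows one conventional choice of 3×3 directional line structuring elements that can stand in for U_1 to U_4 when experimenting with the method; these matrices are an assumption, not the patent's exact operators.

```python
import numpy as np

# Assumed 3x3 directional line structuring elements standing in for U_1..U_4
# (the patent's own matrices are not reproduced in this text).
U = {
    1: np.array([[0, 0, 0],
                 [1, 1, 1],
                 [0, 0, 0]], dtype=np.uint8),   # horizontal
    2: np.array([[0, 1, 0],
                 [0, 1, 0],
                 [0, 1, 0]], dtype=np.uint8),   # vertical
    3: np.eye(3, dtype=np.uint8),               # main diagonal
    4: np.fliplr(np.eye(3, dtype=np.uint8)),    # secondary diagonal
}
```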
In step 2, performing morphological directional gradient contour edge extraction on Image2 comprises:
extracting the morphological directional gradient contour edges using the following formula:
ZI_i = f·U_i - (f Θ U_i)·L
where ZE_i is the coarse extraction result of the outer contour edge, ZI_i is the coarse extraction result of the inner contour edge, f is Image2, ⊕ denotes the morphological dilation operation, Θ the morphological erosion operation, ∘ the morphological opening operation and · the morphological closing operation, and the structure operator L is:
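A hedged OpenCV sketch of this step follows. Only the ZI_i formula survives in this text, so the outer-contour term ZE_i is filled in with a standard dilation-based gradient as an assumption, and the structure operator L (whose matrix is likewise not reproduced) must be supplied by the caller.

```python
import cv2
import numpy as np

def directional_gradient_edges(f: np.ndarray, Ui: np.ndarray, L: np.ndarray):
    # ZI_i = (f closed by U_i) - ((f eroded by U_i) closed by L), as in the text.
    f = f.astype(np.uint8)
    ZI = cv2.subtract(cv2.morphologyEx(f, cv2.MORPH_CLOSE, Ui),
                      cv2.morphologyEx(cv2.erode(f, Ui), cv2.MORPH_CLOSE, L))
    # ZE_i is not reproduced in this extract; a dilation-minus-opening outer
    # gradient is used here purely as an assumed counterpart.
    ZE = cv2.subtract(cv2.dilate(f, Ui),
                      cv2.morphologyEx(f, cv2.MORPH_OPEN, Ui))
    return ZE, ZI
```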
In step 3, extracting the contour detail edges to obtain the unidirectional edge detection results comprises:
extracting the contour detail edges using the following formulas to obtain the unidirectional edge detection results:
ZX_i = Max(ZE_i, ZI_i) - Min(ZE_i, ZI_i)
Z_i = ZE_i + ZI_i + ZX_i
where ZX_i is the contour detail edge, Z_i is the unidirectional edge detection result, and Min denotes the minimum operation over a vector.
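These two formulas translate directly into array operations; a minimal NumPy sketch:

```python
import numpy as np

def unidirectional_edge(ZE: np.ndarray, ZI: np.ndarray) -> np.ndarray:
    ZE = ZE.astype(np.float32)
    ZI = ZI.astype(np.float32)
    ZX = np.maximum(ZE, ZI) - np.minimum(ZE, ZI)   # contour detail edge ZX_i
    return ZE + ZI + ZX                            # unidirectional result Z_i
```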
In step 3, constructing the fusion edge weight factors λ_i and obtaining the edge Image3 by detecting and computing the edges in the horizontal, vertical, main-diagonal and secondary-diagonal directions comprises the following steps:
constructing the fusion edge weight factor λ_i through the following formula:
D_i = Σ|d_i - d_0|
where D_i denotes the gray-level difference of the gradient in each direction; for i = 1, 2, 3, 4, d_i denotes the pixel values in the horizontal, vertical, main-diagonal and secondary-diagonal directions respectively, and d_0 is the center-point pixel value;
fusing the multidirectional edge detection results to generate the final edge detection result Z, namely the edge Image3:
Step 4 comprises: traversing the edge Image3, taking the pixels with value 1 in Image3 as seed pixels, marking all foreground pixels whose adjacent 4-neighborhood or 8-neighborhood pixel value is 1 with the label L, the region formed by all pixels carrying the same label L generating a connected-domain contour Q_1; repeating the above steps until the whole image has been traversed to obtain the final connected-domain vector (Q_1, Q_2, ..., Q_i), i ∈ (1, 2, 3, ...), where Q_i denotes the i-th connected-domain contour.
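A compact sketch of this labeling step, using OpenCV's connected-component and contour routines as an implementation choice (not mandated by the text):

```python
import cv2
import numpy as np

def extract_cloud_contours(image3: np.ndarray):
    # Label 8-connected foreground regions of the binary edge image and
    # return one contour per connected domain (Q_1, Q_2, ...).
    binary = (image3 > 0).astype(np.uint8)
    num_labels, labels = cv2.connectedComponents(binary, connectivity=8)
    contours = []
    for lbl in range(1, num_labels):                # label 0 is the background
        mask = (labels == lbl).astype(np.uint8)
        cs, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours.extend(cs)
    return contours
```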
The method mainly extracts cloud and background edge features through improved morphological gradient edge features; by adaptively weighting and fusing the morphological gradients in all directions, the cloud background region can be detected effectively and the region in which the target is occluded can be predicted in advance.
Beneficial effects: the invention provides an image cloud background detection method under complex sky scenes for scenarios in which cloud layers interfere with air-target detection in images. Tests on a variety of sky backgrounds show that the method detects the cloud background effectively; verified on a domestic FT-2000 platform, it reaches an average processing frame rate of 35 Hz.
Drawings
The foregoing and/or other advantages of the invention will become more apparent from the following detailed description of the invention taken in conjunction with the accompanying drawings.
Fig. 1 is a flowchart of image cloud background detection in a complex sky scene according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a first neighborhood in an embodiment of the present invention.
FIG. 3 is a schematic diagram of a second neighborhood in an embodiment of the present invention.
FIG. 4 is a schematic representation of a morphological directional gradient in an embodiment of the invention.
Fig. 5 is an input image in an embodiment of the invention.
Fig. 6 is a graph showing the result of image edge detection in an embodiment of the present invention.
Fig. 7 shows the contour detection result in the embodiment of the present invention.
Detailed Description
As shown in fig. 1, the present invention provides a method for detecting an image cloud background in a complex sky scene, including:
a) As shown in fig. 5, the infrared image data Image1 is acquired, a 5×5 neighborhood convolution kernel template S is constructed with all matrix values set to 0, and, as shown in fig. 2 and fig. 3, the 5×5 pixel matrix f_W centered on the current image pixel f(x, y) is extracted.
The median m_1 of the first neighborhood of the current window, W_1 = (172, 166, 178, 176, 174), is computed as m_1 = 174; similarly, the median m_2 of the second neighborhood W_2 of the current window is computed as m_2 = 170. Setting the convolution kernel template center pixel S(2, 2) = Max(m_1, m_2) gives S(2, 2) = 174, where Max is the maximum operation over a vector. The neighborhood convolution with S is then performed over the whole original Image1 to obtain the related-neighborhood-filtered Image2;
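A quick numeric check of the quoted values (m_1, m_2 and the template centre), taking the second-neighborhood median of 170 as stated:

```python
import numpy as np

W1 = [172, 166, 178, 176, 174]
m1 = np.median(W1)        # 174.0
m2 = 170                  # median of the second neighborhood, as quoted above
S_centre = max(m1, m2)    # 174.0 is assigned to the template centre S(2, 2)
print(m1, m2, S_centre)
```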
b) Directional gradient edge operators are constructed. To further enhance the extraction of cloud background edges, the horizontal, vertical, main-diagonal and secondary-diagonal directional gradient structure operators U_1, U_2, U_3, U_4 are established respectively, as shown in fig. 4; the directional gradient structure operator U_i (i = 1, 2, 3, 4) is:
The morphological directional gradient contour edge extraction comprises:
ZI_i = f·U_i - (f Θ U_i)·L
where ZE_i is the coarse extraction result of the outer contour edge, ZI_i is the coarse extraction result of the inner contour edge, f is Image2, U_i (i = 1, 2, 3, 4) is the directional gradient edge operator, ⊕ denotes the morphological dilation operation, Θ the morphological erosion operation, ∘ the morphological opening operation and · the morphological closing operation, and the structure operator L is:
c) The contour detail edges are extracted to obtain the unidirectional edge detection results Z_i:
ZX_i = Max(ZE_i, ZI_i) - Min(ZE_i, ZI_i)
Z_i = ZE_i + ZI_i + ZX_i
where ZX_i is the contour detail edge, Z_i is the unidirectional edge detection result, Max is the maximum operation over a vector, and Min is the minimum operation over a vector. As shown in fig. 6, the multidirectional edge detection results are fused to generate the final edge detection result Z, namely the edge Image3, specifically:
where the fusion edge weight factor λ_i is constructed by:
D_i = Σ|d_i - d_0|
where D_i denotes the gray-level difference of the gradient in each direction, d_i denotes the pixel values in the horizontal, vertical, main-diagonal and secondary-diagonal directions respectively, d_0 is the center-point pixel value, and i takes the values 1, 2, 3, 4.
d) The edge Image3 is traversed; the pixels with value 1 in Image3 are taken as seed pixels, all foreground pixels whose adjacent 4-neighborhood or 8-neighborhood pixel value is 1 are marked with the label L, and the region formed by all pixels carrying the same label L generates a connected domain Q_1; the above steps are repeated until the whole image has been traversed, yielding the final connected-domain vector (Q_1, Q_2, ..., Q_i), i ∈ (1, 2, 3, ...). The final results are shown in fig. 7.
The invention provides an image cloud background detection method under a complex sky scene. There are many methods and ways to implement this technical solution; the above is only a preferred embodiment of the invention, and it should be noted that those skilled in the art can make a number of improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be regarded as falling within the protection scope of the invention. Components not explicitly described in this embodiment can be implemented using the prior art.

Claims (6)

1. An image cloud background detection method under a complex sky scene, characterized by comprising the following steps:
Step 1, constructing a 5×5 neighborhood convolution kernel template S, extracting the medians of the related first and second neighborhood envelope features, and performing a neighborhood convolution operation on the original Image1 to obtain the related-neighborhood-filtered Image2;
Step 2, constructing directional gradient edge operators and performing morphological directional gradient contour edge extraction on Image2;
Step 3, extracting contour detail edges to obtain unidirectional edge detection results, constructing fusion edge weight factors λ_i, and performing edge detection and computation over the horizontal, vertical, main-diagonal and secondary-diagonal directions to obtain the edge Image3;
Step 4, performing connected-domain labeling on the edge Image3 and extracting the cloud background connected-domain contours;
wherein in step 3, constructing the fusion edge weight factors λ_i and obtaining the edge Image3 by detecting and computing the edges in the horizontal, vertical, main-diagonal and secondary-diagonal directions comprises:
constructing the fusion edge weight factor λ_i through the following formula:
D_i = Σ|d_i - d_0|
wherein D_i denotes the gray-level difference of the gradient in each direction; for i = 1, 2, 3, 4, d_i denotes the pixel values in the horizontal, vertical, main-diagonal and secondary-diagonal directions respectively, and d_0 is the center-point pixel value;
fusing the multidirectional edge detection results to generate the final edge detection result Z, namely the edge Image3:
2. The method of claim 1, wherein step 1 comprises:
Step 1-1, defining the 5×5 neighborhood convolution kernel template as S, with the S matrix initialized to 0;
Step 1-2, computing the median m_1 of the first neighborhood W_1 of the window centered at f(x, y) of the current image, i.e. m_1 = median(W_1), where median is the median operation over a vector, x is the horizontal coordinate, y is the vertical coordinate, and W_1 is:
W_1 = [f(x-1, y), f(x, y+1), f(x, y), f(x+1, y), f(x, y-1)]
Step 1-3, computing the median m_2 of the second neighborhood W_2 of the window centered at f(x, y) of the current image, i.e. m_2 = median(W_2), where W_2 is:
Step 1-4, setting the center pixel of the convolution kernel template to S(2, 2) = Max(m_1, m_2) to obtain the neighborhood convolution kernel template S, where Max is the maximum operation over a vector.
3. The method according to claim 2, wherein in step 2, constructing the directional gradient edge operators comprises:
establishing the horizontal, vertical, main-diagonal and secondary-diagonal directional gradient structure operators U_1, U_2, U_3, U_4 respectively, the directional gradient structure operator U_i being:
where i = 1, 2, 3, 4.
4. The method according to claim 3, wherein in step 2, performing morphological directional gradient contour edge extraction on Image2 comprises:
extracting the morphological directional gradient contour edges using the following formula:
ZI_i = f·U_i - (f Θ U_i)·L
wherein ZE_i is the coarse extraction result of the outer contour edge, ZI_i is the coarse extraction result of the inner contour edge, f is Image2, ⊕ denotes the morphological dilation operation, Θ the morphological erosion operation, ∘ the morphological opening operation and · the morphological closing operation, and the structure operator L is:
5. The method according to claim 4, wherein in step 3, extracting the contour detail edges to obtain the unidirectional edge detection results comprises:
extracting the contour detail edges using the following formulas to obtain the unidirectional edge detection results:
ZX_i = Max(ZE_i, ZI_i) - Min(ZE_i, ZI_i)
Z_i = ZE_i + ZI_i + ZX_i
wherein ZX_i is the contour detail edge, Z_i is the unidirectional edge detection result, and Min denotes the minimum operation over a vector.
6. The method according to claim 5, wherein step 4 comprises: traversing the edge Image3, taking the pixels with value 1 in Image3 as seed pixels, marking all foreground pixels whose adjacent 4-neighborhood or 8-neighborhood pixel value is 1 with the label L, the region formed by all pixels carrying the same label L generating a connected-domain contour Q_1; repeating the above steps until the whole image has been traversed to obtain the final connected-domain vector (Q_1, Q_2, ..., Q_i), i ∈ (1, 2, 3, ...), where Q_i denotes the i-th connected-domain contour.
CN202110312071.5A 2021-03-24 2021-03-24 Image cloud background detection method under complex sky scene Active CN112967305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110312071.5A CN112967305B (en) 2021-03-24 2021-03-24 Image cloud background detection method under complex sky scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110312071.5A CN112967305B (en) 2021-03-24 2021-03-24 Image cloud background detection method under complex sky scene

Publications (2)

Publication Number Publication Date
CN112967305A (en) 2021-06-15
CN112967305B (en) 2023-10-13

Family

ID=76278239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110312071.5A Active CN112967305B (en) 2021-03-24 2021-03-24 Image cloud background detection method under complex sky scene

Country Status (1)

Country Link
CN (1) CN112967305B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117853432B (en) * 2023-12-26 2024-08-16 北京长木谷医疗科技股份有限公司 Hybrid model-based osteoarthropathy identification method and device
CN117853932B (en) * 2024-03-05 2024-05-14 华中科技大学 Sea surface target detection method, detection platform and system based on photoelectric pod


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077773A (en) * 2014-06-23 2014-10-01 北京京东方视讯科技有限公司 Image edge detection method, and image target identification method and device

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105004737A (en) * 2015-07-14 2015-10-28 浙江大学 Self-adaption improved gradient information-based fruit surface defect detection method
CN105574855A (en) * 2015-12-10 2016-05-11 南京理工大学 Method for detecting infrared small targets under cloud background based on temperate filtering and false alarm rejection
CN105741281A (en) * 2016-01-28 2016-07-06 西安理工大学 Image edge detection method based on neighbourhood dispersion
CN106327522A (en) * 2016-08-25 2017-01-11 上海航天控制技术研究所 Infrared small target detection method based on multi-direction morphological filtering complex cloud background
CN106778499A (en) * 2016-11-24 2017-05-31 江苏大学 A kind of method of quick positioning people's eye iris during iris capturing
CN107578418A (en) * 2017-09-08 2018-01-12 华中科技大学 A kind of indoor scene profile testing method of confluent colours and depth information
CN108109155A (en) * 2017-11-28 2018-06-01 东北林业大学 A kind of automatic threshold edge detection method based on improvement Canny
CN108280823A (en) * 2017-12-29 2018-07-13 南京邮电大学 The detection method and system of the weak edge faults of cable surface in a kind of industrial production
WO2021012757A1 (en) * 2019-07-23 2021-01-28 南京莱斯电子设备有限公司 Real-time target detection and tracking method based on panoramic multichannel 4k video images
CN111507426A (en) * 2020-04-30 2020-08-07 中国电子科技集团公司第三十八研究所 No-reference image quality grading evaluation method and device based on visual fusion characteristics
CN111681262A (en) * 2020-05-08 2020-09-18 南京莱斯电子设备有限公司 Method for detecting infrared dim target under complex background based on neighborhood gradient
CN112184639A (en) * 2020-09-15 2021-01-05 佛山(华南)新材料研究院 Round hole detection method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Infrared dim small target detection based on multi-directional background prediction; 张路, 张志勇, 肖山竹, 卢焕章; 信号处理 (Signal Processing), No. 11; full text *
Research on background suppression algorithms for infrared dim small target detection; 金长江, 师廷伟; 中国测试 (China Measurement & Test), No. 04; full text *
Detection algorithm for weak surface edge defects and its application; 蒋洁琦, 杨庚, 刘沛东, 钱晨; 计算机技术与发展 (Computer Technology and Development), No. 05; full text *

Also Published As

Publication number Publication date
CN112967305A (en) 2021-06-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant