CN108764202B - Airport foreign matter identification method and device, computer equipment and storage medium - Google Patents
- Publication number
- CN108764202B (application CN201810574129.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- detection
- network
- identification
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V20/38 — Physics; Computing; Image or video recognition or understanding; Scenes; Categorising the entire scene, e.g. birthday party or wedding scene; Outdoor scenes
- G06N3/045 — Physics; Computing; Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
- G06T5/73 — Physics; Computing; Image data processing or generation, in general; Image enhancement or restoration; Deblurring; Sharpening
Abstract
The invention discloses an airport foreign object identification method and device, a computer device, and a storage medium. When a foreign object detection model detects a foreign object in a detection image, consecutive predetermined frames of the detection image are acquired to form an identification image set. A comparison result is obtained by measuring the similarity between the feature vector at the corresponding foreign object position in each image of the identification image set and a reference feature vector, and an identification result is finally generated from the comparison result. The method avoids misjudged identification results caused by changes in the surrounding environment (illumination, shadow, and the like) affecting the detection image, screening out a portion of misjudged samples during airport foreign object identification and thereby improving the accuracy of airport foreign object identification.
Description
Technical Field
The invention relates to the field of image recognition, and in particular to an airport foreign object identification method and device, a computer device, and a storage medium.
Background
Various abnormal objects, known as FOD (Foreign Object Debris), often appear on airport runways. FOD generally refers to foreign matter that may damage an aircraft or its systems, and is often called runway foreign matter. FOD comes in a wide variety of types, such as aircraft and engine connectors (nuts, bolts, washers, fuses, etc.), machine tools, flight-line items (nails, personnel badges, pens, pencils, etc.), wildlife, leaves, stones and sand, pavement material, wood, plastic or polyethylene material, paper products, ice fragments in the operations area, and so on. Both experiments and real cases have shown that foreign objects on the airport pavement can easily be sucked into an engine and cause engine failure. Debris can also accumulate in mechanical devices and affect the proper operation of the landing gear, flaps, and other components.
With the development of artificial intelligence, attempts have been made to detect airport foreign objects using deep-learning object detection models. Existing deep-learning object detection models fall mainly into two-step detection (two-stage detector) models (Fast R-CNN, Faster R-CNN, etc.) and single-step detection (single-stage detector) models (FCN, SSD, etc.). When the object occupies an extremely small proportion of the scene (less than one per thousand), the traditional two-step detection model struggles with region selection and runs slowly, making it unsuitable for scenes with real-time requirements. The traditional single-step detection model is not sensitive enough to tiny objects, and its final detected position easily deviates for such objects.
Disclosure of Invention
In view of the above, it is necessary to provide an airport foreign object identification method, apparatus, computer device, and storage medium that can improve the accuracy of airport foreign object identification.
An airport foreign matter identification method comprising:
acquiring a detection image, and detecting the detection image by adopting a foreign matter detection model to acquire a detection result;
if the detection result shows that foreign matters exist in the detection image, acquiring the position of the foreign matters in the detection image as a reference position, and extracting a feature vector of the foreign matters according to the reference position to serve as a reference feature vector;
acquiring continuous preset frame images according to the detection image to form an identification image set;
extracting a feature vector of each identification image in the identification image set according to the reference position, and comparing the feature-vector similarity between the feature vector of each identification image and the reference feature vector to obtain a comparison result;
and generating an identification result according to the comparison result, wherein the identification result includes confirmed foreign object and confirmed non-foreign object.
An airport foreign matter identification apparatus comprising:
the detection result acquisition module is used for acquiring a detection image, and detecting the detection image by adopting a foreign matter detection model to acquire a detection result;
a reference feature vector acquisition module, configured to, if the detection result indicates that a foreign object exists in the detected image, acquire a position of the foreign object in the detected image, as a reference position, and extract a feature vector of the foreign object according to the reference position, as a reference feature vector;
the identification image set forming module is used for obtaining continuous preset frame images according to the detection images to form an identification image set;
the comparison result acquisition module is used for extracting the feature vector of each identification image in the identification image set according to the reference position, and comparing the feature-vector similarity between the feature vector of each identification image and the reference feature vector to acquire a comparison result;
and the identification result acquisition module is used for generating an identification result according to the comparison result, wherein the identification result includes confirmed foreign object and confirmed non-foreign object.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above airport foreign object identification method when executing the computer program.
A computer-readable storage medium storing a computer program which, when executed by a processor, carries out the steps of the above airport foreign object identification method.
In the airport foreign object identification method and device, computer device, and storage medium, when the foreign object detection model detects that a foreign object exists in a detection image, an identification image set is formed by acquiring consecutive predetermined frames of the detection image. A comparison result is obtained by measuring the similarity between the feature vector at the corresponding foreign object position in each image of the identification image set and the reference feature vector, and an identification result is finally generated from the comparison result. The method avoids misjudged identification results caused by changes in the surrounding environment (illumination, shadow, and the like) affecting the detection image, screening out a portion of misjudged samples during foreign object identification and thereby improving the accuracy of airport foreign object identification.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an application environment of an airport foreign object identification method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an example of a method for identifying airport foreign objects in accordance with an embodiment of the present invention;
FIG. 3 is a diagram illustrating step S10 of the airport foreign object identification method according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating step S11 of the airport foreign object identification method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating step S12 of the airport foreign object identification method according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of an airport foreign object identification apparatus in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The airport foreign object identification method provided by the application can be applied to the application environment shown in fig. 1, wherein a client (computer device) communicates with a server through a network. The client sends the detection image to the server, and the server identifies the detection image to generate an identification result. The client (computer device) may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, video capture devices, and portable wearable devices. The server can be implemented by an independent server or a server cluster composed of a plurality of servers.
In an embodiment, as shown in fig. 2, an airport foreign object identification method is provided. The method is described below as applied to the server in fig. 1, and includes the following steps:
s10: and acquiring a detection image, and detecting the detection image by adopting a foreign matter detection model to acquire a detection result.
The detection image is obtained by splitting the video data of an airport surveillance video into predetermined frames at a certain time interval. Preferably, the detection images are ordered chronologically, and the foreign object detection model is then used to detect each detection image and obtain a detection result. Optionally, the detection result is either that a foreign object exists in the detection image or that no foreign object exists in it. The foreign object detection model is a pre-trained recognition model; optionally, it may be implemented with a two-step detection (two-stage detector) model (Fast R-CNN, Faster R-CNN, etc.) or a single-step detection (single-stage detector) model (FCN, SSD, etc.). The detection image is detected by the pre-trained foreign object detection model, which outputs the detection result.
S20: and if the detection result is that the foreign matter exists in the detection image, acquiring the position of the foreign matter in the detection image as a reference position, and extracting the feature vector of the foreign matter according to the reference position as a reference feature vector.
The detection image is detected with the foreign object detection model; if the model outputs that a foreign object exists in the detection image, the position of the foreign object in the detection image is acquired, and the corresponding feature vector is extracted based on that position as the reference feature vector. Specifically, the detected foreign object may first be scaled to a predetermined size (e.g., 32×32) and its feature vector then extracted as the reference feature vector. Optionally, a color histogram and a Histogram of Oriented Gradients (HOG) of the foreign object may be extracted to form the reference feature vector.
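As a concrete illustration of this feature extraction, the following Python sketch crops the object at the reference position, scales it to 32×32, and concatenates a color histogram with HOG descriptors; the function name, bin counts, and HOG parameters are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np
from skimage.feature import hog

def extract_feature_vector(image, box):
    """Crop the detected object, scale to 32x32, and build a feature vector
    from a color histogram plus HOG descriptors."""
    x, y, w, h = box                                  # reference position
    patch = cv2.resize(image[y:y + h, x:x + w], (32, 32))
    # Color histogram over the three BGR channels (8 bins per channel).
    hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256]).flatten()
    hist /= hist.sum() + 1e-8                         # normalize
    # HOG descriptor on the grayscale patch.
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return np.concatenate([hist, hog_vec])
```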
S30: and acquiring continuous preset frame images according to the detection image to form an identification image set.
Based on the detection image, the corresponding consecutive predetermined frames are acquired to form an identification image set. The consecutive predetermined frames are the predetermined number of frames adjacent to and following the detection image in the video data containing it. For example, the 20 frames following the detection image are acquired to form the identification image set.
S40: and extracting the characteristic vector of each identification image in the identification image set according to the reference position, and comparing the characteristic vector similarity of each identification image with the characteristic vector similarity of the reference characteristic vector to obtain a comparison result.
A feature vector of each identification image in the identification image set is acquired according to the reference position, and the feature vector of each identification image is compared with the reference feature vector to obtain the feature-vector similarity. Specifically, an algorithm such as the Minkowski distance, the Euclidean distance, or the Mahalanobis distance may be used to calculate the feature-vector similarity between the feature vector of each identification image and the reference feature vector. The calculated feature-vector similarity is compared with a preset similarity threshold to obtain a comparison result, which is either similar or dissimilar. For example, when the feature-vector similarity is greater than or equal to the similarity threshold, the comparison result is similar; when the feature-vector similarity is less than the similarity threshold, the comparison result is dissimilar.
S50: and generating a recognition result according to the comparison result, wherein the recognition result comprises the confirmed foreign matters and the confirmed non-foreign matters.
The comparison results of all identification images in the identification image set are counted. When the number of similar comparison results is greater than or equal to a judgment threshold, the identification result is confirmed foreign object; when the number of similar comparison results is less than the judgment threshold, the identification result is confirmed non-foreign object. The judgment threshold may be set from the number of images in the identification image set, for example 60%, 80%, or 90% of that number.
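A minimal sketch of this comparison-and-voting logic (steps S40 and S50) follows; the Euclidean distance, the distance-to-similarity mapping, and the threshold values are assumptions for illustration only.

```python
import numpy as np

def is_confirmed_foreign_object(ref_vec, frame_vecs,
                                sim_threshold=0.8, vote_ratio=0.8):
    """ref_vec: reference feature vector; frame_vecs: one feature vector per
    identification image, all extracted at the reference position."""
    similar_votes = 0
    for vec in frame_vecs:
        dist = np.linalg.norm(vec - ref_vec)       # Euclidean distance
        similarity = 1.0 / (1.0 + dist)            # map distance into (0, 1]
        if similarity >= sim_threshold:            # comparison result: similar
            similar_votes += 1
    # Confirmed foreign object when enough frames agree with the reference.
    return similar_votes >= vote_ratio * len(frame_vecs)
```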
In the detection image, there is a possibility that a shadow is formed in the detection image due to the influence of a change in the surrounding environment (illumination, shadow, or the like). When the detection image is detected by using the foreign object detection model, it is possible to recognize the shadow as a foreign object. When the detection result of the foreign matter detection model is that the foreign matter exists, the influence of the shadow existing in the detection image on the identification result is eliminated by further comparing the similarity of the feature vectors at the corresponding positions in the continuous preset frame images of the detection image.
In the present embodiment, when the foreign object detection model detects that a foreign object exists in a detection image, an identification image set is formed by acquiring consecutive predetermined frames of the detection image. A comparison result is obtained by measuring the similarity between the feature vector at the corresponding foreign object position in each image of the identification image set and the reference feature vector, and an identification result is finally generated from the comparison result. The method avoids misjudged identification results caused by changes in the surrounding environment (illumination, shadow, and the like) affecting the detection image, screening out a portion of misjudged samples during foreign object identification and thereby improving the accuracy of airport foreign object identification.
In an embodiment, as shown in fig. 3, detecting the detection image by using the foreign object detection model to obtain the detection result includes the following steps:
s11: and preprocessing the detection image to obtain an image to be identified.
Preprocessing the detection image refers to enhancing it so as to improve subsequent detection accuracy. When a detection image is acquired, many factors degrade it: non-uniform illumination, the limitations of the acquisition equipment, differences in the acquisition environment, and so on all leave the detection image insufficiently sharp and lower the subsequent identification accuracy. Therefore, this step preprocesses the detection image to improve subsequent detection accuracy. Optionally, the detection image may undergo global or local enhancement with an image-enhancement algorithm, and the enhanced detection image is then sharpened to obtain the image to be identified. Preferably, the image-enhancement algorithm may be the multi-scale Retinex algorithm, an adaptive histogram-equalization algorithm, an optimized contrast algorithm, or the like. The image to be identified is obtained by applying this enhancement and sharpening processing to the detection image.
S12: and inputting the image to be recognized into a full-difference pyramid feature network recognition model for recognition, and acquiring a classification confidence map.
The full-difference-pyramid feature network recognition model is a neural network recognition model built on the encoder-decoder pattern, using a densely connected full-difference network (DenseNet) as the encoding network and a pyramid feature network (RefineNet) as the decoding network.
Specifically, the full-difference network splices together layers at different depths of a neural network model so that the input of each layer includes the outputs of all preceding layers; this prevents tiny objects from being lost during the model's upsampling. The full-difference network improves the transmission efficiency of information and gradients through the network: each layer obtains gradients directly from the loss function and receives the input signal directly, so a deeper network can be trained. This network structure also has a regularizing effect, and the full-difference network improves network performance through feature reuse. Adopting the full-difference network therefore not only reduces the loss of tiny objects during the model's upsampling, but also speeds up training and reduces overfitting.
The pyramid feature network is a multi-path refinement network: it extracts all the information produced during downsampling and obtains a high-resolution prediction network through long-range network connections. The pyramid feature network uses fine-grained layer features, which improves the high-level semantic information. It uses many RCUs (residual convolution units), forming short-range connections inside the pyramid feature network that benefit training. In addition, the pyramid feature network forms long-range connections with the full-difference network, so gradients can be propagated effectively through the whole network; this increases the influence of low-level features on the final result and effectively improves the localization accuracy of the object (airport foreign matter).
The classification confidence map is an image displayed after the image to be identified has been detected, with the different categories labeled in different ways. Optionally, different colors may be used to distinguish the different categories in the image to be identified. For example, objects that may appear in the detection image include runways, lawns, airport equipment (non-foreign objects), airport foreign objects, and so on, so different colors can be assigned to these kinds of objects in advance. After the image to be identified is input into the full-difference-pyramid feature network recognition model, the model combines its judgments for the different regions of the image with the pre-assigned colors to form the classification confidence map.
In one embodiment, airport foreign objects may also be labeled with more specific objects, such as: engine attachments (nuts, screws, washers, fuses, etc.), machine tools, flying objects (nails, personal certificates, pens, pencils, etc.), animals, and the like. And the types of the airport foreign matters are assigned to, so that the specific types of the airport foreign matters can be further determined when the airport foreign matters are identified, and appropriate treatment measures can be conveniently made.
S13: and obtaining a detection result according to the classification confidence map, wherein the detection result comprises the existence of foreign matters in the detection image and the absence of foreign matters in the detection image.
After the classification confidence map is acquired, the detection result, which is either that a foreign object exists in the detection image or that none exists, can be obtained from the colors on the classification confidence map. For example, if airport foreign objects have been assigned the color red, then after the classification confidence map is acquired, it is determined whether a red region exists in it, giving the two possible detection results. If a red region exists in the classification confidence map, a foreign object is present, and the detection result is that a foreign object exists in the detection image. If no red region exists, no foreign object is present, and the detection result is that no foreign object exists in the detection image. Optionally, the detection result may be presented as text, voice, or a signal light, or a combination of at least two of them. For example, when the detection result is that a foreign object exists in the detection image, a voice prompt can be issued together with a warning light to better alert the relevant personnel.
In one embodiment, when a red region exists in the classification confidence map, the position information of the airport foreign object may also be acquired; in this case the detection result further includes that position information. Specifically, each image to be identified may be assigned an identification tag in advance that locates the source of the image, for example which camera device captured it. Thus, when a red region appears in the classification confidence map, its position within the image to be identified can be obtained, and, combined with the image's identification tag, the actual position in the airport of the foreign object corresponding to the red region can be determined.
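The following sketch illustrates reading the detection result off such a color-coded confidence map; the BGR value used for the foreign object class and the function name are assumptions.

```python
import numpy as np

FOD_COLOR = np.array([0, 0, 255])        # red in BGR, assumed label color

def detect_from_confidence_map(confidence_map):
    """Return the detection result and, if present, the bounding box of the
    foreign object region in the classification confidence map."""
    mask = np.all(confidence_map == FOD_COLOR, axis=-1)
    if not mask.any():
        return "no foreign object in the detection image", None
    ys, xs = np.nonzero(mask)
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return "foreign object exists in the detection image", box
```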
In this embodiment, the detection image is preprocessed to obtain the image to be identified, which improves subsequent detection accuracy. The full-difference-pyramid feature network recognition model is then used to identify the image to be identified, which guarantees both the identification accuracy and the localization accuracy for tiny objects while also improving identification efficiency.
In an embodiment, as shown in fig. 4, in step S11, that is, preprocessing the detection image to obtain an image to be recognized, the method specifically includes the following steps:
s111: and carrying out global enhancement processing on the detection image by adopting a multi-scale retina algorithm.
The Multi-Scale retina (MSR) algorithm is an image enhancement processing algorithm, and is used for reducing the influence of various factors (such as interference noise, edge detail loss, and the like) of an unprocessed original image. And enhancing the detected image by adopting a multi-scale retinal algorithm, removing the illumination image of the detected image, reserving the reflected image, and adjusting the gray dynamic range of the detected image to obtain the reflection information of the reflected image corresponding to the detected image, thereby achieving the enhancement effect.
Preferably, the global enhancement processing of the detection image with the multi-scale Retinex algorithm specifically includes the following.
and performing global enhancement processing on the detection image by adopting the following formula:
wherein N is of scaleThe number, (x, y) is the coordinate value of the pixel of the detection image, G (x, y) is the input of the multi-scale retina algorithm, namely the gray value of the detection image, R (x, y) is the output of the multi-scale retina algorithm, namely the gray value of the detection image after the global enhancement processing, w n Is a scale weight factor with the constraint ofF n (x, y) is the nth center-surround function, and the expression is:
in the formula, σ n As a scale parameter of the n-th central surround function, coefficient K n Must satisfy:
specifically, the gray value G (x, y) of the detection image is obtained by an image information obtaining tool, and the scale parameter sigma of the n input center surrounding functions is determined according to the input scale parameter sigma n Is determined to satisfyK of n Then the center surround function F n And (x, y) and G (x, y) are calculated according to the following formula to obtain the gray value R (x, y) of the detection image after the global enhancement processing:
Here $\sigma_n$ determines the size of the neighborhood of the center-surround function, and that neighborhood size determines the quality of the enhanced image: the larger $\sigma_n$ is, the larger the selected neighborhood, the smaller the influence of each pixel of the detection image on its surrounding pixels, and the more the local details of the detection image are highlighted.
In a specific embodiment, the number of scales is chosen as N = 3, and accordingly:

$\sigma_1 = 30$, $\sigma_2 = 110$, $\sigma_3 = 200$;

where $\sigma_1$, $\sigma_2$, and $\sigma_3$ correspond respectively to the low-gray, middle-gray, and high-gray portions of the detection image's gray-value interval [0, 255], and $w_1 = w_2 = w_3 = 1/3$. With these parameter settings, the multi-scale Retinex algorithm takes the low, middle, and high gray scales into account simultaneously, thereby obtaining a better result. By combining several scales, the multi-scale Retinex algorithm achieves good adaptivity, highlights texture details in dark regions of the image, and adjusts the image's dynamic range, achieving the goal of image enhancement.
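A minimal sketch of this MSR step under the stated parameters (N = 3, σ = 30, 110, 200, equal weights of 1/3) is shown below; approximating the center-surround convolution with a normalized Gaussian blur and rescaling the output to [0, 255] are implementation assumptions.

```python
import cv2
import numpy as np

def msr_enhance(gray, sigmas=(30, 110, 200)):
    """Multi-scale Retinex on a grayscale detection image G(x, y)."""
    g = gray.astype(np.float64) + 1.0            # offset to avoid log(0)
    r = np.zeros_like(g)
    for sigma in sigmas:
        # Gaussian blur plays the role of F_n(x, y) * G(x, y); the kernel is
        # normalized, which absorbs the coefficient K_n.
        surround = cv2.GaussianBlur(g, (0, 0), sigma)
        r += (np.log(g) - np.log(surround)) / len(sigmas)   # w_n = 1/3
    # Stretch the result back to the [0, 255] gray dynamic range.
    r = (r - r.min()) / (r.max() - r.min() + 1e-8)
    return (255 * r).astype(np.uint8)
```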
S112: and sharpening the detected image after the global enhancement processing by adopting a Laplace operator to obtain an image to be identified.
The Laplacian operator is a second-order differential operator suited to correcting image blur caused by the diffuse reflection of light. Applying a Laplacian sharpening transformation to an image reduces its blur and improves its clarity. Sharpening the globally enhanced detection image therefore highlights its edge detail features and improves the clarity of its contours. Sharpening refers to a transformation that makes an image crisper, strengthening object boundaries and image details. After the globally enhanced detection image is sharpened with the Laplacian operator, halos are weakened while the detail features of the image edges are enhanced, thereby protecting the details of the detection image.
The Laplacian operator based on the second-order differential is defined as

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}.$$

For the globally enhanced detection image R(x, y), the second derivative is

$$\nabla^2 R(x,y) = \frac{\partial^2 R(x,y)}{\partial x^2} + \frac{\partial^2 R(x,y)}{\partial y^2}.$$

After obtaining the Laplacian $\nabla^2 R(x,y)$, each pixel gray value of the globally enhanced detection image R(x, y) is sharpened according to the following formula, where g(x, y) is the sharpened pixel gray value:

$$g(x,y) = R(x,y) - \nabla^2 R(x,y).$$

The sharpened gray value replaces the original gray value at each pixel (x, y), yielding the image to be identified.
In one embodiment, the four-neighborhood sharpening template matrix H is selected for the Laplacian operator:

$$H = \begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$

and the globally enhanced detection image is sharpened with the Laplacian operator using this four-neighborhood sharpening template matrix H.
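A sketch of this sharpening step with the four-neighborhood template H follows; the clipping back to [0, 255] and the data-type handling are implementation assumptions.

```python
import cv2
import numpy as np

H = np.array([[0,  1, 0],
              [1, -4, 1],
              [0,  1, 0]], dtype=np.float64)    # four-neighborhood Laplacian

def laplacian_sharpen(enhanced):
    """g(x, y) = R(x, y) - laplacian(R)(x, y) on the MSR-enhanced image."""
    r = enhanced.astype(np.float64)
    lap = cv2.filter2D(r, -1, H)                # Laplacian response via H
    g = r - lap
    return np.clip(g, 0, 255).astype(np.uint8)
```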
In this embodiment, the detection image is globally enhanced with the multi-scale Retinex algorithm, and the enhanced image is then sharpened with the Laplacian operator, which weakens halos while enhancing the detail features of the image edges and thus protects the details of the detection image. The steps are simple and convenient, the edge detail features of the resulting image to be identified are more distinct, its texture features are strengthened, and the accuracy of identifying the image to be identified is improved.
In an embodiment, as shown in fig. 5, before the step of inputting the image to be recognized into the full-difference-pyramid feature network recognition model for recognition and obtaining the classification confidence map, the airport foreign object recognition method further includes:
s121: and acquiring a training sample set, and carrying out classification and labeling on training images in the training sample set.
The training sample set comprises training images, which are the sample images used to train the full-difference-pyramid feature network recognition model. Optionally, the training images may be obtained by placing video or image capture devices at different locations in the airport; the devices capture the corresponding data and send it to the server. If the server receives video data, the video can be split into frames at a preset frame rate to obtain training images. Classifying and labeling the training images means labeling the different objects in each training image. For example, objects that may appear in a training image include runways, lawns, airport equipment (non-foreign objects), and airport foreign objects. Different labeling information is assigned to the different objects in the training images, completing their classification and labeling.
S122: and training a total difference network by adopting the training images labeled by the training sample set in a classified manner to obtain a target output vector.
In this step, the full-difference network is trained with the classified and labeled training images of the training sample set. In the full-difference network, the input training image is denoted $x_0$; the network consists of L layers, each applying a nonlinear transformation $H_l(\cdot)$. Optionally, the nonlinear transformation may consist of ReLU (the activation function) and pooling; of BN (the batch-normalization layer), ReLU, and a convolutional layer; or of BN, ReLU, and pooling. Through standardization, BN adjusts the distribution of the input values of every neuron in each layer toward a standard normal distribution with mean 0 and variance 1, so that the activation inputs fall in the region where the nonlinear function is sensitive to its input. This enlarges the gradients, avoids the vanishing-gradient problem, and greatly accelerates training. ReLU is a piecewise-linear, one-sided suppression function: all negative inputs are output as 0 while positive inputs are kept unchanged. Through ReLU, the sparsified model can better mine relevant features and fit the training data.
In this embodiment, let the output of the l-th layer of the full-difference network be $x_l$. Each layer in the full-difference network is directly connected to all previous layers, i.e.:

$$x_l = H_l([x_0, x_1, \ldots, x_{l-1}]);$$
The outputs of the corresponding layers of the full-difference network form the target output vector, which is subsequently used to train the pyramid feature network. A sketch of this dense connectivity is given below.
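The PyTorch sketch below illustrates the dense connectivity $x_l = H_l([x_0, \ldots, x_{l-1}])$; the layer composition (Conv followed by ReLU) and the growth rate of 16 follow the three-module configuration described later, while everything else is an illustrative assumption.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        # H_l(.): convolution followed by ReLU activation.
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1)

    def forward(self, features):
        # features is the list [x_0, x_1, ..., x_{l-1}].
        x = torch.cat(features, dim=1)        # concatenate all previous outputs
        return torch.relu(self.conv(x))

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers))

    def forward(self, x0):
        features = [x0]
        for layer in self.layers:
            features.append(layer(features))  # x_l = H_l([x_0, ..., x_{l-1}])
        # Three layers with growth rate 16 give 48 output feature maps.
        return torch.cat(features[1:], dim=1)
```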
S123: and training the pyramid feature network by adopting the target output vector to obtain a total difference-pyramid feature network identification model.
In the pyramid feature network, each layer's output within the target output vector of the full-difference network is connected to an RCU of the pyramid feature network; that is, the pyramid feature network contains as many RCUs as there are layers in the target output vector of the full-difference network.
The RCU (residual convolution unit) is a unit structure drawn from the residual network, consisting specifically of ReLU, convolution, and summation parts. The target output vectors of the layers obtained from the full-difference network each pass through the ReLU, convolution, and summation operations. The outputs of the RCUs are then processed by multi-resolution fusion to obtain the fused feature maps: each RCU output feature map is first adapted with a convolutional layer and then upsampled to the largest resolution of the layer. Chained residual pooling upsamples the input feature maps of different resolutions to the same size as the largest output feature map and then superimposes them. Finally, the superimposed output feature map is convolved by an RCU to obtain a fine feature map.
The function of the pyramid feature network is to fuse feature maps of different resolutions. The pre-trained full-difference network is divided into several blocks according to feature-map resolution; these blocks are then treated as multiple paths and fused through the pyramid feature network, finally yielding a fine feature map (subsequently passed through a softmax layer and output via bilinear interpolation).
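As a rough illustration of the multi-resolution fusion mentioned above, the following sketch adapts two input paths with a convolution each, upsamples the coarser path, and sums them; this simplifies the fusion block considerably, and its exact structure is an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiResolutionFusion(nn.Module):
    def __init__(self, low_channels, high_channels, out_channels):
        super().__init__()
        # One adapting convolution per input path.
        self.low_conv = nn.Conv2d(low_channels, out_channels, 3, padding=1)
        self.high_conv = nn.Conv2d(high_channels, out_channels, 3, padding=1)

    def forward(self, low_res, high_res):
        low = self.low_conv(low_res)
        high = self.high_conv(high_res)
        # Upsample the coarser map to the largest resolution, then sum.
        low = F.interpolate(low, size=high.shape[-2:], mode='bilinear',
                            align_corners=False)
        return low + high
```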
In the pyramid feature network, the network is first trained with the target output vector from the full-difference network to form a preliminary trained network; verification samples are then used to verify and adjust the pyramid feature network until a preset classification accuracy is reached, at which point training ends. The preset classification accuracy can be set according to the actual requirements on the recognition model.
In this embodiment, the full-difference-pyramid feature network recognition model is obtained by training with the classified and labeled training sample set, which guarantees the recognition accuracy and speed of the full-difference-pyramid feature network recognition model.
In an embodiment, training the full-difference network specifically includes:
and setting an initial convolution layer of the total difference network, and performing down-sampling by adopting a maximum pooling layer in the total difference network.
The convolutional layer is used for feature extraction of the input image, and the initial convolutional layer is extracted as a feature of the training image, and optionally, the initial convolutional layer adopts a convolutional kernel of 7*7. And (4) adopting a maximum pooling layer in the total difference network to perform downsampling, wherein the downsampling is performed if the new sampling rate is less than the original sampling rate in the sampling process. Maximum pooling (max-pooling) refers to the sampling function taking the maximum of all neurons within a region. The input image passing through the initial convolution layer is subjected to maximum pooling processing, feature compression is carried out, main features are extracted, and the network calculation complexity is simplified.
Three full-difference network modules are arranged, each comprising a full-difference convolutional layer and a full-difference activation layer; the activation function in the full-difference activation layer adopts a linear activation function.
In the three full-difference network modules, the output of each module is the combination of the outputs of all previous modules, i.e.:

$$x_l = H_l([x_0, x_1, \ldots, x_{l-1}]),\quad l = 1, 2, 3;$$

where each $H_l(\cdot)$ is the combination of a convolutional-layer and an activation-layer operation: Conv then ReLU. Optionally, the convolution kernel size in the full-difference convolutional layer is 3×3. The number of features each $H_l(\cdot)$ outputs is the feature growth rate; optionally, the feature growth rate is set to 16, so the output of the three-layer full-difference module comprises 48 features. The linear activation function is formulated as

$$f(x) = \max(0, x);$$

the transformation applied by this linear activation function lets the training process converge quickly.
Transmission layers are arranged between the full-difference network modules, each comprising a normalization layer, a transmission activation layer, and an average pooling layer.
In the full-difference network modules, the number of output features of each module keeps growing; with the settings above, a feature growth rate of 16 gives the three-layer module an output of 48 features. The computational load thus increases step by step, so a transmission layer is introduced, with a transmission parameter indicating the factor by which the transmission layer reduces its input. Illustratively, a transmission parameter of 0.6 means the input of the transmission layer is reduced to 0.6 of its original size.
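A sketch of such a transmission layer follows, using the 0.6 transmission parameter from the example; expressing it as BN, ReLU, 1×1 convolution, then average pooling matches the composition listed above, while the pooling stride is an assumption.

```python
import torch.nn as nn

def transmission_layer(in_channels, transmission=0.6):
    """Reduce the feature count between full-difference modules, e.g. 48 -> 28."""
    out_channels = int(in_channels * transmission)
    return nn.Sequential(
        nn.BatchNorm2d(in_channels),             # normalization layer
        nn.ReLU(inplace=True),                   # transmission activation layer
        nn.Conv2d(in_channels, out_channels, kernel_size=1),
        nn.AvgPool2d(kernel_size=2, stride=2))   # average pooling layer
```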
In this embodiment, setting the network structure and the parameters of the full-difference network in this way ensures the training speed and training accuracy of the full-difference network.
In one embodiment, during training of the full-difference-pyramid feature network recognition model, the loss function is implemented with the Focal Loss function:

$$FL(p_t) = -(1 - p_t)^{\gamma}\log(p_t);$$

where $p_t$ is the prediction of the full-difference-pyramid feature network recognition model for the training image,

$$p_t = \begin{cases} p, & y = 1 \\ 1 - p, & y = -1 \end{cases}$$

$p \in [0, 1]$ is the model's estimated probability that the training image has y = 1, y is the label value of the training image, and γ is an adjustment parameter.
A loss function maps an event (an element of a sample space) onto a real number expressing the economic or opportunity cost associated with that event. In this embodiment, when training the full-difference-pyramid feature network recognition model, the loss function measures the model's prediction performance: the smaller the loss, the better the recognition model predicts. In embodiments of the invention, the number of sample images per class in the training sample set may be unbalanced; in particular, training images containing airport foreign objects may be few, so this loss function is chosen to better improve the prediction ability of the full-difference-pyramid feature network recognition model.
The loss function is therefore implemented with the Focal Loss function, which adds the modulating factor $(1 - p_t)^{\gamma}$, with the adjustment parameter γ taking values in [0, 5]. y is the label value of the training image; for example, for foreign-object labels, y = 1 if the object is a foreign object and y = -1 if it is not. When a training image is misclassified, $p_t$ is very small, the modulating factor $(1 - p_t)^{\gamma}$ is close to 1, and the loss is hardly affected; as $p_t$ approaches 1, the modulating factor approaches 0, so the loss contributed by correctly classified samples is scaled down.
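A minimal sketch of this Focal Loss for the binary foreign-object / non-foreign-object case is given below; γ = 2 is an assumed value within the stated [0, 5] range, and the clamping is a numerical-stability assumption.

```python
import torch

def focal_loss(p, y, gamma=2.0):
    """p: estimated probability of y = 1; y: labels in {1, -1}.
    Implements FL(p_t) = -(1 - p_t)^gamma * log(p_t)."""
    p_t = torch.where(y == 1, p, 1.0 - p)           # p_t as defined above
    p_t = p_t.clamp_min(1e-8)                       # avoid log(0)
    return (-(1.0 - p_t) ** gamma * torch.log(p_t)).mean()
```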
In this embodiment, adopting the Focal Loss function when training the full-difference-pyramid feature network recognition model reduces the influence of unbalanced class samples on the training of the model, thereby improving subsequent detection accuracy.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In one embodiment, an airport foreign matter recognition apparatus is provided, which corresponds to the airport foreign matter recognition method in the above-described embodiment one to one. As shown in fig. 6, the airport foreign object recognition apparatus includes a detection result acquisition module 10, a reference feature vector acquisition module 20, a recognition image set composition module 30, a comparison result acquisition module 40, and a recognition result acquisition module 50. The functional modules are explained in detail as follows:
and the detection result acquisition module 10 is configured to acquire a detection image, detect the detection image by using a foreign object detection model, and acquire a detection result.
And a reference feature vector acquisition module 20, configured to, if the detection result is that a foreign object exists in the detected image, acquire a position of the foreign object in the detected image as a reference position, and extract a feature vector of the foreign object according to the reference position as a reference feature vector.
And an identification image set composing module 30, configured to acquire consecutive predetermined frame images according to the detection image, and compose an identification image set.
And the comparison result acquisition module 40 is configured to extract a feature vector of each identification image in the identification image set according to the reference position, and to compare the feature-vector similarity between the feature vector of each identification image and the reference feature vector to obtain a comparison result.
And the identification result acquisition module 50 is used for generating an identification result according to the comparison result, wherein the identification result includes confirmed foreign object and confirmed non-foreign object.
Preferably, the detection result acquisition module 10 includes an image to be recognized acquisition unit, a classification confidence map acquisition unit, and a detection result acquisition unit.
And the image to be recognized acquiring unit is used for preprocessing the detection image to obtain the image to be recognized.
And the classification confidence map acquisition unit is used for inputting the image to be recognized into the full-difference pyramid feature network recognition model for recognition and acquiring a classification confidence map.
And the detection result acquisition unit is used for acquiring a detection result according to the classification confidence map, wherein the detection result comprises the existence of the foreign matters in the detection image and the nonexistence of the foreign matters in the detection image.
Preferably, the image acquiring unit to be identified comprises a global enhancement processing subunit and a sharpening processing subunit.
And the global enhancement processing subunit is used for performing global enhancement processing on the detection image with the multi-scale Retinex algorithm.
And the sharpening processing subunit is used for sharpening the globally enhanced detection image with the Laplacian operator to obtain the image to be identified.
Preferably, the airport foreign matter recognition device further comprises a training sample set acquisition module, a target output vector acquisition module and a recognition model acquisition module.
And the training sample set acquisition module is used for acquiring a training sample set and carrying out classification and labeling on training images in the training sample set.
And the target output vector acquisition module is used for training the full-differential network by adopting the training images labeled by the training sample set in a classified manner to obtain a target output vector.
And the identification model acquisition module is used for training the pyramid feature network by adopting the target output vector to obtain a total difference-pyramid feature network identification model.
For specific limitations of the airport foreign object identification device, reference may be made to the above limitations of the airport foreign object identification method, which are not described herein again. The modules in the airport foreign object recognition device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent of a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure thereof may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the detection image and the foreign object detection model data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an airport foreign object identification method.
In one embodiment, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
and acquiring a detection image, and detecting the detection image by adopting a foreign matter detection model to acquire a detection result.
And if the detection result is that the foreign matter exists in the detection image, acquiring the position of the foreign matter in the detection image as a reference position, and extracting a feature vector of the foreign matter according to the reference position as a reference feature vector.
And acquiring continuous preset frame images according to the detection image to form an identification image set.
And extracting the feature vector of each identification image in the identification image set according to the reference position, and comparing the feature-vector similarity between the feature vector of each identification image and the reference feature vector to obtain a comparison result.
And generating an identification result according to the comparison result, wherein the identification result includes confirmed foreign object and confirmed non-foreign object.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the following steps (a code sketch of the verification steps follows below):
acquiring a detection image, and detecting the detection image using a foreign object detection model to obtain a detection result;
if the detection result is that a foreign object exists in the detection image, acquiring the position of the foreign object in the detection image as a reference position, and extracting a feature vector of the foreign object according to the reference position as a reference feature vector;
acquiring a preset number of consecutive frame images according to the detection image to form an identification image set;
extracting a feature vector of each identification image in the identification image set according to the reference position, and comparing the feature vector of each identification image with the reference feature vector for similarity to obtain a comparison result; and
generating a recognition result according to the comparison result, wherein the recognition result includes confirmed foreign object and confirmed non-foreign object.
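The verification steps above lend themselves to a compact sketch. The following is a minimal illustration, not the patented implementation: it assumes a caller-supplied extract_features function (for example, the recognition model's feature head), a bounding box for the reference position, cosine similarity as the comparison measure, and a majority-vote decision rule; the threshold and vote counts are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def confirm_foreign_object(reference_vec, frames, reference_box, extract_features,
                           threshold=0.8, min_matches=None):
    """Re-check a detection over the follow-up frames: crop each frame at the
    reference position, extract its feature vector, and compare it with the
    reference feature vector; confirm only if enough frames agree."""
    if min_matches is None:
        min_matches = len(frames) // 2 + 1  # simple majority vote (assumption)
    x, y, w, h = reference_box
    matches = 0
    for frame in frames:
        crop = frame[y:y + h, x:x + w]  # same position as the original detection
        vec = extract_features(crop)    # caller-supplied feature extractor (hypothetical)
        if cosine_similarity(reference_vec, vec) >= threshold:
            matches += 1
    return "confirmed foreign object" if matches >= min_matches else "confirmed non-foreign object"
```

Requiring agreement across several consecutive frames is what suppresses transient false positives such as illumination changes or passing shadows, since a real foreign object yields a stable feature vector at the same position frame after frame.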
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated as an example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the device may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and should all be construed as falling within that scope.
Claims (7)
1. An airport foreign object identification method, comprising:
acquiring a detection image, and preprocessing the detection image to obtain an image to be recognized;
inputting the image to be recognized into a total-difference pyramid feature network recognition model for recognition, and acquiring a classification confidence map;
obtaining a detection result according to the classification confidence map, wherein the detection result is either that a foreign object exists in the detection image or that no foreign object exists in the detection image;
if the detection result is that a foreign object exists in the detection image, acquiring the position of the foreign object in the detection image as a reference position, and extracting a feature vector of the foreign object according to the reference position as a reference feature vector;
acquiring a preset number of consecutive frame images according to the detection image to form an identification image set;
extracting a feature vector of each identification image in the identification image set according to the reference position, and comparing the feature vector of each identification image with the reference feature vector for similarity to obtain a comparison result; and
generating a recognition result according to the comparison result, wherein the recognition result includes confirmed foreign object and confirmed non-foreign object;
wherein, before the step of inputting the image to be recognized into the total-difference pyramid feature network recognition model for recognition and acquiring the classification confidence map, the method comprises:
acquiring a training sample set, and classifying and labeling the training images in the training sample set;
training a total-difference network using the classified and labeled training images in the training sample set to obtain a target output vector; and
training a pyramid feature network using the target output vector to obtain the total-difference pyramid feature network recognition model;
wherein the training of the total-difference network comprises:
setting an initial convolutional layer of the total-difference network, and performing downsampling using a max pooling layer in the total-difference network;
setting three total-difference network modules, each total-difference network module comprising a total-difference convolutional layer and a total-difference activation layer, wherein the activation function in the total-difference activation layer is a linear activation function; and
setting transition layers between the total-difference network modules, wherein each transition layer comprises a normalization layer, a transition activation layer, and an average pooling layer.
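As a rough illustration of the training-network layout claimed above, the PyTorch sketch below wires up an initial convolutional layer, max-pool downsampling, three modules, and intervening transition layers (normalization, activation, average pooling). It is a sketch under stated assumptions, not the patented architecture: channel widths, kernel sizes, strides, and the use of ReLU as a stand-in for the claimed linear activation are all assumptions.

```python
import torch
import torch.nn as nn

class TotalDifferenceNetwork(nn.Module):
    """Skeleton following the claimed layout; hyperparameters are illustrative."""
    def __init__(self, in_channels: int = 3, width: int = 64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, width, kernel_size=7, stride=2, padding=3),  # initial convolutional layer
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),                   # downsampling by max pooling
        )
        blocks = []
        for i in range(3):  # three total-difference network modules
            blocks.append(nn.Sequential(
                nn.Conv2d(width, width, kernel_size=3, padding=1),  # total-difference convolutional layer
                nn.ReLU(inplace=True),  # activation layer (ReLU assumed for the claimed linear activation)
            ))
            if i < 2:  # transition layer between modules
                blocks.append(nn.Sequential(
                    nn.BatchNorm2d(width),        # normalization layer
                    nn.ReLU(inplace=True),        # transition activation layer
                    nn.AvgPool2d(kernel_size=2),  # average pooling layer
                ))
        self.features = nn.Sequential(*blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(self.stem(x))

# Usage sketch: a 3-channel image batch in, a downsampled feature map out.
# feats = TotalDifferenceNetwork()(torch.randn(1, 3, 224, 224))
```

In this reading, the feature maps produced by such a backbone would feed the pyramid feature network trained in the next step of the claim.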
2. The airport foreign object identification method according to claim 1, wherein preprocessing the detection image to obtain the image to be recognized specifically comprises:
performing global enhancement processing on the detection image using a multi-scale Retinex algorithm; and
sharpening the globally enhanced detection image using the Laplacian operator to obtain the image to be recognized.
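The preprocessing chain of claim 2 (multi-scale Retinex for global enhancement, then Laplacian sharpening) can be sketched with OpenCV and NumPy. This is a minimal sketch assuming the common log-domain Retinex formulation and literature-typical Gaussian scales of 15, 80, and 250; none of these values are specified in the patent.

```python
import cv2
import numpy as np

def multi_scale_retinex(img: np.ndarray, sigmas=(15, 80, 250)) -> np.ndarray:
    """Multi-scale Retinex: average the single-scale outputs, where each scale
    is log(image) - log(Gaussian-blurred image)."""
    img = img.astype(np.float64) + 1.0  # offset to avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)  # kernel size derived from sigma
        msr += np.log(img) - np.log(blurred)
    msr /= len(sigmas)
    return cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def laplacian_sharpen(img: np.ndarray) -> np.ndarray:
    """Sharpen by subtracting the Laplacian (second-derivative edge response)."""
    lap = cv2.Laplacian(img, cv2.CV_64F)
    sharpened = img.astype(np.float64) - lap
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# Preprocessing as in claim 2: global enhancement, then sharpening.
# detection_image = cv2.imread("frame.png")  # hypothetical input path
# image_to_recognize = laplacian_sharpen(multi_scale_retinex(detection_image))
```

Retinex compensates for uneven runway illumination before detection, and the Laplacian step restores edge contrast that the enhancement tends to soften.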
3. The airport foreign object identification method according to claim 1, wherein, in the process of training the total-difference pyramid feature network recognition model, the loss function is the Focal Loss function:
FL(p_t) = -(1 - p_t)^γ · log(p_t);
wherein p_t is the predicted value of the total-difference pyramid feature network recognition model for the training image, with p_t = p when y = 1 and p_t = 1 - p otherwise; p ∈ [0, 1] is the model's estimated probability that the training image has label y = 1; y is the labeled value of the training image; and γ is an adjustable focusing parameter.
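The claimed loss is straightforward to express in code. The sketch below assumes a binary classification head producing probabilities p, and uses γ = 2.0 merely as the common default from the Focal Loss literature; the patent leaves γ as an adjustable parameter.

```python
import torch

def focal_loss(p: torch.Tensor, y: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Binary Focal Loss FL(p_t) = -(1 - p_t)^gamma * log(p_t),
    with p_t = p where y == 1 and p_t = 1 - p otherwise."""
    p = p.clamp(1e-7, 1 - 1e-7)           # numerical stability for the log
    p_t = torch.where(y == 1, p, 1 - p)   # estimated probability of the true class
    return (-(1 - p_t).pow(gamma) * p_t.log()).mean()
```

The (1 - p_t)^γ factor down-weights well-classified examples, which suits foreign-object detection where background pixels vastly outnumber foreign-object pixels.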
4. An airport foreign object recognition device, comprising:
a detection result acquisition module, configured to acquire a detection image and preprocess the detection image to obtain an image to be recognized; input the image to be recognized into a total-difference pyramid feature network recognition model for recognition and acquire a classification confidence map; and obtain a detection result according to the classification confidence map, wherein the detection result is either that a foreign object exists in the detection image or that no foreign object exists in the detection image;
a reference feature vector acquisition module, configured to, if the detection result is that a foreign object exists in the detection image, acquire the position of the foreign object in the detection image as a reference position, and extract a feature vector of the foreign object according to the reference position as a reference feature vector;
an identification image set forming module, configured to acquire a preset number of consecutive frame images according to the detection image to form an identification image set;
a comparison result acquisition module, configured to extract a feature vector of each identification image in the identification image set according to the reference position, and compare the feature vector of each identification image with the reference feature vector for similarity to obtain a comparison result; and
a recognition result acquisition module, configured to generate a recognition result according to the comparison result, wherein the recognition result includes confirmed foreign object and confirmed non-foreign object;
wherein the detection result acquisition module is further configured to acquire a training sample set and classify and label the training images in the training sample set; train a total-difference network using the classified and labeled training images in the training sample set to obtain a target output vector; and train a pyramid feature network using the target output vector to obtain the total-difference pyramid feature network recognition model;
the detection result acquisition module is further configured to set an initial convolutional layer of the total-difference network and perform downsampling using a max pooling layer in the total-difference network; set three total-difference network modules, each comprising a total-difference convolutional layer and a total-difference activation layer, wherein the activation function in the total-difference activation layer is a linear activation function; and set transition layers between the total-difference network modules, wherein each transition layer comprises a normalization layer, a transition activation layer, and an average pooling layer.
5. The airport foreign object recognition device according to claim 4, wherein the detection result acquisition module comprises:
an image acquisition unit, configured to preprocess the detection image to obtain the image to be recognized;
a classification confidence map acquisition unit, configured to input the image to be recognized into the total-difference pyramid feature network recognition model for recognition and acquire the classification confidence map; and
a detection result acquisition unit, configured to obtain the detection result according to the classification confidence map, wherein the detection result is either that a foreign object exists in the detection image or that no foreign object exists in the detection image.
6. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the airport foreign object identification method according to any one of claims 1 to 3.
7. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the airport foreign object identification method according to any one of claims 1 to 3.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810574129.1A CN108764202B (en) | 2018-06-06 | 2018-06-06 | Airport foreign matter identification method and device, computer equipment and storage medium |
PCT/CN2018/092614 WO2019232831A1 (en) | 2018-06-06 | 2018-06-25 | Method and device for recognizing foreign object debris at airport, computer apparatus, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810574129.1A CN108764202B (en) | 2018-06-06 | 2018-06-06 | Airport foreign matter identification method and device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108764202A CN108764202A (en) | 2018-11-06 |
CN108764202B (en) | 2023-04-18
Family
ID=63999142
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810574129.1A Active CN108764202B (en) | 2018-06-06 | 2018-06-06 | Airport foreign matter identification method and device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108764202B (en) |
WO (1) | WO2019232831A1 (en) |
Families Citing this family (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111160074A (en) * | 2018-11-08 | 2020-05-15 | 欧菲影像技术(广州)有限公司 | Method and device for identifying foreign matters in equipment, computer equipment and storage medium |
CN109218233B (en) * | 2018-11-14 | 2021-07-20 | 国家电网有限公司 | OFDM channel estimation method based on depth feature fusion network |
CN109583502B (en) * | 2018-11-30 | 2022-11-18 | 天津师范大学 | Pedestrian re-identification method based on anti-erasure attention mechanism |
CN109766884A (en) * | 2018-12-26 | 2019-05-17 | 哈尔滨工程大学 | A kind of airfield runway foreign matter detecting method based on Faster-RCNN |
CN109765557B (en) * | 2018-12-30 | 2021-05-04 | 上海微波技术研究所(中国电子科技集团公司第五十研究所) | FOD target self-adaptive rapid classification identification method, system and medium based on distribution characteristics |
CN109829501B (en) * | 2019-02-01 | 2021-02-19 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN109919869B (en) * | 2019-02-28 | 2021-06-04 | 腾讯科技(深圳)有限公司 | Image enhancement method and device and storage medium |
CN111862105A (en) * | 2019-04-29 | 2020-10-30 | 北京字节跳动网络技术有限公司 | Image area processing method and device and electronic equipment |
WO2020223963A1 (en) * | 2019-05-09 | 2020-11-12 | Boe Technology Group Co., Ltd. | Computer-implemented method of detecting foreign object on background object in image, apparatus for detecting foreign object on background object in image, and computer-program product |
CN111178200B (en) * | 2019-12-20 | 2023-06-02 | 海南车智易通信息技术有限公司 | Method for identifying instrument panel indicator lamp and computing equipment |
CN111259763B (en) * | 2020-01-13 | 2024-02-02 | 华雁智能科技(集团)股份有限公司 | Target detection method, target detection device, electronic equipment and readable storage medium |
CN111539443B (en) * | 2020-01-22 | 2024-02-09 | 北京小米松果电子有限公司 | Image recognition model training method and device and storage medium |
CN111310645B (en) * | 2020-02-12 | 2023-06-13 | 上海东普信息科技有限公司 | Method, device, equipment and storage medium for warning overflow bin of goods accumulation |
CN111340796B (en) * | 2020-03-10 | 2023-07-21 | 创新奇智(成都)科技有限公司 | Defect detection method and device, electronic equipment and storage medium |
CN111462059B (en) * | 2020-03-24 | 2023-09-29 | 湖南大学 | Parallel processing method and device for intelligent target detection of fetal ultrasonic image |
CN113449554B (en) * | 2020-03-25 | 2024-03-08 | 北京灵汐科技有限公司 | Target detection and identification method and system |
CN111340137A (en) * | 2020-03-26 | 2020-06-26 | 上海眼控科技股份有限公司 | Image recognition method, device and storage medium |
CN111553277B (en) * | 2020-04-28 | 2022-04-26 | 电子科技大学 | Chinese signature identification method and terminal introducing consistency constraint |
CN111680609B (en) * | 2020-06-03 | 2023-02-07 | 合肥中科类脑智能技术有限公司 | Foreign matter identification system and method based on image registration and target detection |
CN111897025A (en) * | 2020-08-06 | 2020-11-06 | 航泰众联(北京)科技有限公司 | Airport pavement foreign matter detection equipment and system based on 3D/2D integration detection |
CN112037197A (en) * | 2020-08-31 | 2020-12-04 | 中冶赛迪重庆信息技术有限公司 | Hot-rolled bar cold-shearing material accumulation detection method, system and medium |
CN112308073B (en) * | 2020-11-06 | 2023-08-25 | 中冶赛迪信息技术(重庆)有限公司 | Method, system, equipment and medium for identifying loading and unloading and transferring states of scrap steel train |
CN112396005A (en) * | 2020-11-23 | 2021-02-23 | 平安科技(深圳)有限公司 | Biological characteristic image recognition method and device, electronic equipment and readable storage medium |
CN112668423B (en) * | 2020-12-18 | 2024-05-28 | 平安科技(深圳)有限公司 | Corridor sundry detection method and device, terminal equipment and storage medium |
CN112580509B (en) * | 2020-12-18 | 2022-04-15 | 中国民用航空总局第二研究所 | Logical reasoning type road surface detection method and system |
CN112686172B (en) * | 2020-12-31 | 2023-06-13 | 上海微波技术研究所(中国电子科技集团公司第五十研究所) | Airport runway foreign matter detection method, device and storage medium |
CN113159089A (en) * | 2021-01-18 | 2021-07-23 | 安徽建筑大学 | Pavement damage identification method, system, computer equipment and storage medium |
CN112766151B (en) * | 2021-01-19 | 2022-07-12 | 北京深睿博联科技有限责任公司 | Binocular target detection method and system for blind guiding glasses |
CN112949474B (en) * | 2021-02-26 | 2023-01-31 | 山东鹰格信息工程有限公司 | Airport FOD monitoring method, equipment, storage medium and device |
CN113111703B (en) * | 2021-03-02 | 2023-07-28 | 郑州大学 | Airport pavement disease foreign matter detection method based on fusion of multiple convolutional neural networks |
CN113096080B (en) * | 2021-03-30 | 2024-01-16 | 四川大学华西第二医院 | Image analysis method and system |
CN113111810B (en) * | 2021-04-20 | 2023-12-08 | 北京嘀嘀无限科技发展有限公司 | Target identification method and system |
CN113177459A (en) * | 2021-04-25 | 2021-07-27 | 云赛智联股份有限公司 | Intelligent video analysis method and system for intelligent airport service |
CN113177922A (en) * | 2021-05-06 | 2021-07-27 | 中冶赛迪重庆信息技术有限公司 | Raw material foreign matter identification method, system, medium and electronic terminal |
CN113393401B (en) * | 2021-06-24 | 2023-09-05 | 上海科技大学 | Object detection hardware accelerator, system, method, apparatus and medium |
CN113344900B (en) * | 2021-06-25 | 2023-04-18 | 北京市商汤科技开发有限公司 | Airport runway intrusion detection method, airport runway intrusion detection device, storage medium and electronic device |
CN113780074A (en) * | 2021-08-04 | 2021-12-10 | 五邑大学 | Method and device for detecting quality of wrapping paper and storage medium |
CN113658135B (en) * | 2021-08-17 | 2024-02-02 | 中国矿业大学 | Fuzzy PID-based self-adaptive dimming belt foreign matter detection method and system |
CN113807227B (en) * | 2021-09-11 | 2023-07-25 | 浙江浙能嘉华发电有限公司 | Safety monitoring method, device, equipment and storage medium based on image recognition |
CN114155746B (en) * | 2021-12-01 | 2022-09-13 | 南京莱斯电子设备有限公司 | FOD alarm accuracy rate and FOD alarm false alarm rate calculation method |
WO2023118937A1 (en) * | 2021-12-20 | 2023-06-29 | Sensetime International Pte. Ltd. | Object recognition method, apparatus, device and storage medium |
CN114708213A (en) * | 2022-03-28 | 2022-07-05 | 龙岩烟草工业有限责任公司 | Tobacco shred sundry detection method and device, computer equipment and storage medium |
CN114821194B (en) * | 2022-05-30 | 2023-07-25 | 深圳市科荣软件股份有限公司 | Equipment running state identification method and device |
CN115147770B (en) * | 2022-08-30 | 2022-12-02 | 山东千颐科技有限公司 | Belt foreign matter vision recognition system based on image processing |
CN116269378B (en) * | 2023-01-09 | 2023-11-17 | 西安电子科技大学 | Psychological health state detection device based on skin nicotinic acid response video analysis |
CN116704446B (en) * | 2023-08-04 | 2023-10-24 | 武汉工程大学 | Real-time detection method and system for foreign matters on airport runway pavement |
CN117523318B (en) * | 2023-12-26 | 2024-04-16 | 宁波微科光电股份有限公司 | Anti-light interference subway shielding door foreign matter detection method, device and medium |
CN117690164B (en) * | 2024-01-30 | 2024-04-30 | 成都欣纳科技有限公司 | Airport bird identification and driving method and system based on edge calculation |
CN117726882A (en) * | 2024-02-07 | 2024-03-19 | 杭州宇泛智能科技有限公司 | Tower crane object identification method, system and electronic equipment |
CN117994753B (en) * | 2024-04-03 | 2024-06-07 | 浙江浙能数字科技有限公司 | Vision-based device and method for detecting abnormality of entrance track of car dumper |
CN118628839A (en) * | 2024-08-09 | 2024-09-10 | 南昌航空大学 | Compact shelf passage foreign matter detection method based on YOLO-FOD |
CN118624635A (en) * | 2024-08-09 | 2024-09-10 | 克拉玛依红果实生物制品有限公司 | Edible oil production quality control system and method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105160362B (en) * | 2015-10-22 | 2018-10-09 | 中国民用航空总局第二研究所 | A kind of runway FOD image detection method and devices |
CN105740910A (en) * | 2016-02-02 | 2016-07-06 | 北京格灵深瞳信息技术有限公司 | Vehicle object detection method and device |
CN107766864B (en) * | 2016-08-23 | 2022-02-01 | 斑马智行网络(香港)有限公司 | Method and device for extracting features and method and device for object recognition |
EP3151164A3 (en) * | 2016-12-26 | 2017-04-12 | Argosai Teknoloji Anonim Sirketi | A method for foreign object debris detection |
CN107481233A (en) * | 2017-08-22 | 2017-12-15 | 广州辰创科技发展有限公司 | A kind of image-recognizing method being applied in FOD foreign bodies detection radars |
Application Events (2018)
- 2018-06-06: CN application CN201810574129.1A — granted as CN108764202B (Active)
- 2018-06-25: PCT application PCT/CN2018/092614 — published as WO2019232831A1 (Application Filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017107188A1 (en) * | 2015-12-25 | 2017-06-29 | 中国科学院深圳先进技术研究院 | Method and apparatus for rapidly recognizing video classification |
CN107423690A (en) * | 2017-06-26 | 2017-12-01 | 广东工业大学 | A kind of face identification method and device |
CN107545263A (en) * | 2017-08-02 | 2018-01-05 | 清华大学 | A kind of object detecting method and device |
CN107562900A (en) * | 2017-09-07 | 2018-01-09 | 广州辰创科技发展有限公司 | Method and system for analyzing airfield runway foreign matter based on big data mode |
Non-Patent Citations (1)
Title |
---|
Ivan Kreso et al., "Ladder-style DenseNets for Semantic Segmentation of Large Natural Images," IEEE, 2018 (full text). *
Also Published As
Publication number | Publication date |
---|---|
CN108764202A (en) | 2018-11-06 |
WO2019232831A1 (en) | 2019-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108764202B (en) | Airport foreign matter identification method and device, computer equipment and storage medium | |
CN109086656B (en) | Airport foreign matter detection method, device, computer equipment and storage medium | |
KR102030628B1 (en) | Recognizing method and system of vehicle license plate based convolutional neural network | |
US11847775B2 (en) | Automated machine vision-based defect detection | |
US11144889B2 (en) | Automatic assessment of damage and repair costs in vehicles | |
Yi et al. | An end‐to‐end steel strip surface defects recognition system based on convolutional neural networks | |
CN107563372B (en) | License plate positioning method based on deep learning SSD frame | |
Anagnostopoulos et al. | A license plate-recognition algorithm for intelligent transportation system applications | |
Maurya et al. | Road extraction using k-means clustering and morphological operations | |
WO2019157288A1 (en) | Systems and methods for physical object analysis | |
CN111080628A (en) | Image tampering detection method and device, computer equipment and storage medium | |
CN110781836A (en) | Human body recognition method and device, computer equipment and storage medium | |
Neto et al. | Brazilian vehicle identification using a new embedded plate recognition system | |
EP3915042B1 (en) | Tyre sidewall imaging method | |
WO2019154383A1 (en) | Tool detection method and device | |
CN109002755B (en) | Age estimation model construction method and estimation method based on face image | |
CN105160355B (en) | A kind of method for detecting change of remote sensing image based on region correlation and vision word | |
CN112686248B (en) | Certificate increase and decrease type detection method and device, readable storage medium and terminal | |
Do et al. | Automatic license plate recognition using mobile device | |
CN116485779B (en) | Adaptive wafer defect detection method and device, electronic equipment and storage medium | |
Sulehria et al. | Vehicle number plate recognition using mathematical morphology and neural networks | |
CN112784712B (en) | Missing child early warning implementation method and device based on real-time monitoring | |
CN115239672A (en) | Defect detection method and device, equipment and storage medium | |
CN114821484A (en) | Airport runway FOD image detection method, system and storage medium | |
CN114842478A (en) | Text area identification method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |