CN112329771A - Building material sample identification method based on deep learning - Google Patents
Building material sample identification method based on deep learning
- Publication number
- CN112329771A (application CN202011201983.7A)
- Authority
- CN
- China
- Prior art keywords
- building material
- roi
- material sample
- image
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T7/0004—Industrial image inspection
- G06T7/11—Region-based segmentation
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention provides a building material sample identification method based on deep learning, comprising a model training stage and a sample identification stage. The model training stage includes producing a building material sample data set, constructing a multi-scale information fusion convolutional neural network for sample identification, and applying data enhancement to the sample data set to obtain the best model performance. The sample identification stage includes inputting the processed building material sample image into the model, performing feature extraction to generate feature maps of optimal size, correcting the generated feature maps into ROIs, passing the ROIs of different scales into the ROI pooling layer, mapping them to proposals of uniform size, projecting the proposals onto the original building material sample image to generate proposal feature maps, processing these through the BBOX and CLS branches, and so on, to produce an accurately positioned building material detection frame and identify the material performance state of the sample. The method extracts information through multi-scale feature maps, learns target feature information of different scales well, offers good identification performance and universality, and has broad application prospects in the field of construction engineering.
Description
Technical Field
The invention belongs to the field of construction engineering, and particularly relates to a building material sample identification method based on deep learning.
Background
With growing data-processing demands and the rapid development of artificial intelligence, machine learning and deep learning methods have been applied to building material sample identification, including shallow network algorithms such as clustering neural networks, support vector machines, and wavelet-transform neural networks. However, shallow network algorithms require various complex algorithms to extract and determine sample identification features from the echo information; their computational complexity and resource consumption are high, so their versatility is low. The convolutional neural network, one of the key models in deep learning, has a network structure that is highly invariant to image transformations such as translation, inversion, and affine transformation; it has therefore been widely applied across computer vision in recent years with excellent results. However, a conventional single linear convolutional neural network produces output only from its last layer: information is extracted from a feature map of a single scale, so target features at different scales clearly cannot be learned well, and good identification performance is difficult to achieve in complex building material scenarios.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a building material sample identification method based on deep learning.
The purpose of the invention is realized by the following technical scheme:
a building material sample identification method based on deep learning comprises a model training stage and a sample identification stage,
the model training phase comprises the following steps:
S1, collecting and labeling building material samples to produce a building material sample data set; the data set contains building material samples of each category and is divided into a training set, a test set, and an evaluation set;
s2, constructing a multi-scale information fusion convolutional neural network for building material sample identification;
S3, performing data enhancement on the sample data set from S1 to obtain the best model performance;
the sample identification phase comprises the following steps:
S1, inputting the processed building material sample image into the model and extracting features through a ResNet pre-trained on ImageNet with its top layer removed, generating feature maps of optimal size;
S2, passing the feature maps generated in S1 through RPNs 2, 3, 4 and 5 respectively to generate candidate anchors of different sizes; the anchor area is set per level, and the anchors generated by all the RPNs uniformly adopt the aspect ratios 1:1, 1:2 and 2:1, generating a number of candidate anchors; according to the ground-truth labels, each RPN uses binary classification and a bounding box regression function to screen out the anchor that covers the target most completely and correct it to serve as an ROI;
S3, for ROIs of different scales, using the feature layers output by different residual convolution modules as input to the ROI pooling layer: large-scale ROIs use the output feature map of a deeper convolution module, with the ROI level k = ⌊k0 + log2(√(wh)/224)⌋ serving as the criterion for which layer's convolution module output to use;
S4, passing the ROIs generated in S3 into the ROI pooling layer, which uniformly maps the multi-scale ROIs to proposals of the same size; the proposals are projected onto the original building material sample image to generate proposal feature maps for subsequent processing by the BBOX and CLS branches;
S5, in the CLS branch, computing the class of each sample from the proposal feature map through a fully connected layer and softmax, and outputting the highest class probability as the confidence;
S6, in the BBOX branch, correcting the proposal region with a bounding box regression function to generate a more accurately positioned building material detection frame and identify the material performance state of the building material sample.
Preferably, the model training phase S3 includes the following steps:
S31, combining multiple data enhancement methods to construct building material sample images at different scales and in different scenes, expanding the existing data to simulate complex recognition scenarios, thereby improving the model's learning of detailed feature information and enhancing its universality;
S32, setting the initial weights to weights pre-trained on ImageNet, the initial learning rate to 0.001, the learning-rate decay factor to 0.1 and the batch_size to 16, and setting the input image size;
S33, in the loss function, the RPN-series modules adopt binary classification loss and regression loss, the CLS branch adopts multi-class classification loss, and the BBOX branch adopts regression loss;
and S34, training on the training set and test set with an SGD (stochastic gradient descent) optimizer until model performance is optimal.
Preferably, the method for enhancing data in S31 includes:
S311、Random Erasing:
(1) IRE: randomly selecting an occlusion position anywhere on the target image;
(2) ORE: randomly selecting an occlusion position within the bounding-box area of the target;
(3) combining IRE and ORE;
S312、Hide and Seek:
the picture is divided into an S×S grid, each cell is hidden with a certain probability, and a different set of cells is hidden for the same picture in each training batch;
S313、Grid Mask:
to avoid deleting a complete target or losing context information through over-large deleted regions, as can occur in S311 and S312, Grid Mask sets four parameters x, y, r and d and builds a binary mask with keep ratio k = M/(H × W),
where r determines the size of the kept part of each mask unit, M is the number of reserved pixels, and H and W are the image height and width; x and y are offset coordinates generated randomly on the image; the unoccluded area takes the value 1 and the occluded area the value 0, yielding a mask at the same resolution as the original image, which is then multiplied with the original image to obtain the augmented image;
S314、Mixup
performing mixed enhancement on the images, mixing images across different classes; the algorithm can be summarized as x̃ = λx1 + (1 − λ)x2,
where x1 and x2 are pixels of different images and λ is the mixing weight;
S315、Cutmix
a portion of the image area is randomly cropped out and filled with the corresponding region's pixel values from other data in the training set.
Preferably, generating the optimal feature maps after feature extraction in the sample identification stage S1 includes the following steps:
S11, denoting the last n residual convolution modules in the ResNet as {C1, C2, C3, …, Cn} and extracting the output feature map of each residual module, denoted {P1, P2, P3, …, Pn};
S12, upsampling the deepest feature map Pn by a factor of 2 with nearest-neighbor interpolation;
S13, extracting the output feature map Pn−1 of the residual convolution module Cn−1 adjacent to Cn and applying a 1×1 convolution for dimensionality reduction;
S14, fusing it with the upsampled feature map by adding the pixel values at corresponding positions;
S15, applying a 3×3 convolution to the fused feature map to reduce the aliasing effect introduced by upsampling;
and S16, iterating this process until the optimal-size feature maps are generated.
The invention has the beneficial effects that: the method provided by the invention can be used for extracting information through the multi-scale feature map, well learning the target feature information with different scales, has good identification performance and universality, and has wide application prospects in the field of construction engineering.
Detailed Description
The technical solution of the invention is described in detail below with reference to an embodiment. The invention discloses a building material sample identification method based on deep learning, comprising a model training stage and a sample identification stage. The model training stage comprises the following steps:
S1, collecting and labeling building material samples to produce a building material sample data set; the data set contains building material samples of each category and is divided into a training set, a test set, and an evaluation set;
s2, constructing a multi-scale information fusion convolutional neural network for building material sample identification;
S3, performing data enhancement on the sample data set from S1 to obtain the best model performance.
Specifically:
S31, combining multiple data enhancement methods to construct building material sample images at different scales and in different scenes, expanding the existing data to simulate complex recognition scenarios, thereby improving the model's learning of detailed feature information and enhancing its universality;
S32, setting the initial weights to weights pre-trained on ImageNet, the initial learning rate to 0.001, the learning-rate decay factor to 0.1, the batch_size to 16, and the input image size to 224 × 224.
S33, in the loss function, the RPN-series modules adopt binary classification loss and regression loss; the CLS branch adopts multi-class classification loss, and the BBOX branch adopts regression loss.
S34, training for 20 epochs on the training and test sets with an SGD (stochastic gradient descent) optimizer until model performance is optimal.
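The training configuration of S32–S34 maps directly onto a few lines of framework code. Below is a minimal PyTorch sketch under stated assumptions: a plain ResNet-18 classifier stands in for the full multi-scale detector, the RPN/CLS/BBOX losses are collapsed into a single cross-entropy term, a dummy batch replaces a real data loader, and the scheduler step size and class count are assumptions; the hyperparameters (SGD, learning rate 0.001, decay factor 0.1, batch_size 16, 224 × 224 inputs, 20 epochs) follow the text.

```python
import torch
import torch.nn as nn
import torchvision

# Stand-in model: ResNet-18 with a 5-class head (class count is an assumption)
model = torchvision.models.resnet18(num_classes=5)
criterion = nn.CrossEntropyLoss()          # stand-in for the summed S33 losses
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# Decay the learning rate by the stated factor 0.1 (step size is an assumption)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

images = torch.randn(16, 3, 224, 224)      # one dummy batch: batch_size 16, 224x224
labels = torch.randint(0, 5, (16,))

for epoch in range(20):                    # 20 epochs, as in S34
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()
```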
The enhancement methods used in S31 are as follows:
S311、Random Erasing:
(1) IRE: randomly selecting an occlusion position anywhere on the target image;
(2) ORE: randomly selecting an occlusion position within the bounding-box area of the target;
(3) combining IRE and ORE;
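As a concrete illustration of S311, here is a minimal NumPy sketch of one erasing pass; the area and aspect-ratio ranges are common Random Erasing defaults, not values from the patent. Passing the whole image as the region gives IRE, passing a target's bounding box gives ORE, and applying both calls in sequence combines the two.

```python
import random
import numpy as np

def random_erase(img, region=None, s_min=0.02, s_max=0.2, ratios=(0.3, 3.3)):
    """Erase one random rectangle inside region = (x1, y1, x2, y2) of ints.

    region = None erases anywhere on the image (IRE); region = the target's
    bounding box erases inside the target (ORE). img is an HxWxC uint8 array.
    """
    h_img, w_img = img.shape[:2]
    x1, y1, x2, y2 = region if region is not None else (0, 0, w_img, h_img)
    area = (x2 - x1) * (y2 - y1)
    for _ in range(100):                              # retry until the patch fits
        s = random.uniform(s_min, s_max) * area       # target occluded area
        r = random.uniform(*ratios)                   # aspect ratio of the patch
        h = int(round(np.sqrt(s * r)))
        w = int(round(np.sqrt(s / r)))
        if 0 < h < y2 - y1 and 0 < w < x2 - x1:
            top = random.randint(y1, y2 - h)
            left = random.randint(x1, x2 - w)
            img[top:top + h, left:left + w] = np.random.randint(
                0, 256, (h, w, img.shape[2]), dtype=img.dtype)
            return img
    return img                                        # no fitting patch found
```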
S312、Hide and Seek:
the picture is divided into an S×S grid, each cell is hidden with a certain probability, and a different set of cells is hidden for the same picture in each training batch;
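A minimal sketch of S312, assuming S = 4 and a hiding probability of 0.5 (the patent specifies neither value); because the cells are re-drawn on every call, the same picture gets a different hidden grid group in each batch, as described.

```python
import random

def hide_and_seek(img, s=4, p_hide=0.5):
    """Split img (an HxWxC array) into an s x s grid and zero each cell
    independently with probability p_hide; in practice the dataset mean
    is often used as the fill value instead of zero."""
    h, w = img.shape[:2]
    gh, gw = h // s, w // s
    out = img.copy()
    for i in range(s):
        for j in range(s):
            if random.random() < p_hide:
                out[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw] = 0
    return out
```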
S313、Grid Mask:
to avoid deleting a complete target or losing context information through over-large deleted regions, as can occur in S311 and S312, Grid Mask sets four parameters x, y, r and d and builds a binary mask with keep ratio k = M/(H × W),
where r determines the size of the kept part of each mask unit, M is the number of reserved pixels, and H and W are the image height and width; x and y are offset coordinates generated randomly on the image; the unoccluded area takes the value 1 and the occluded area the value 0, yielding a mask at the same resolution as the original image, which is then multiplied with the original image to obtain the augmented image;
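The following sketch illustrates S313 under stated assumptions: the unit length d and keep fraction r are given illustrative defaults, and each d × d unit gets one square occluded region, following the GridMask paper's construction. The mask takes value 1 where pixels are kept and 0 where they are hidden, and the keep ratio k = M/(H × W) is returned alongside the masked image.

```python
import numpy as np

def grid_mask(img, d=32, r=0.5, x=0, y=0):
    """Build a binary grid mask at the image's resolution and multiply it in.

    d: side length of one repeating unit; r: fraction of each unit kept;
    (x, y): random offsets of the grid. img is an HxWxC uint8 array.
    """
    h, w = img.shape[:2]
    mask = np.ones((h, w), dtype=img.dtype)
    keep = int(d * r)                         # kept side length inside each unit
    for top in range(y - d, h, d):
        for left in range(x - d, w, d):
            t0, t1 = max(top + keep, 0), min(top + d, h)
            l0, l1 = max(left + keep, 0), min(left + d, w)
            if t0 < t1 and l0 < l1:
                mask[t0:t1, l0:l1] = 0        # occluded region takes value 0
    k = mask.sum() / float(h * w)             # keep ratio k = M / (H*W)
    return img * mask[..., None], k
```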
S314、Mixup
performing mixed enhancement on the images, mixing images across different classes; the algorithm can be summarized as x̃ = λx1 + (1 − λ)x2,
where x1 and x2 are pixels of different images and λ is the mixing weight.
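A minimal sketch of S314: two images are blended pixel-wise with weight λ, and their labels are blended the same way. Drawing λ from a Beta(α, α) distribution follows the original mixup formulation and is an assumption here; the patent only specifies the weighted pixel mixing.

```python
import numpy as np

def mixup(x1, x2, y1, y2, alpha=1.0):
    """x1, x2: images (float arrays of the same shape, possibly from
    different classes); y1, y2: one-hot label vectors."""
    lam = np.random.beta(alpha, alpha)        # mixing weight lambda
    x = lam * x1 + (1.0 - lam) * x2           # pixel-wise mix of the two images
    y = lam * y1 + (1.0 - lam) * y2           # labels mixed with the same weight
    return x, y, lam
```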
S315、Cutmix
a portion of the image area is randomly cropped out and filled with the corresponding region's pixel values from other data in the training set.
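A minimal sketch of S315, assuming the patch sizing of the CutMix paper (side lengths proportional to √(1 − λ)); the patent only states that a random region is cropped and filled from another training sample.

```python
import numpy as np

def cutmix(x1, x2, lam=None):
    """Crop a random rectangle out of x1 and fill it with the corresponding
    pixels of another training image x2 (both HxWxC arrays of equal shape)."""
    if lam is None:
        lam = np.random.beta(1.0, 1.0)
    h, w = x1.shape[:2]
    cut_h = int(h * np.sqrt(1.0 - lam))       # patch height
    cut_w = int(w * np.sqrt(1.0 - lam))       # patch width
    cy, cx = np.random.randint(h), np.random.randint(w)
    t, b = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    l, r = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    out = x1.copy()
    out[t:b, l:r] = x2[t:b, l:r]              # fill from the other sample
    return out
```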
The sample identification phase comprises the following steps:
S1, inputting the processed building material sample image into the model and extracting features through a ResNet pre-trained on ImageNet with its top layer removed.
S2, denoting the last 5 residual convolution modules in the ResNet as {C1, C2, C3, C4, C5} and extracting the output feature maps of these 5 residual modules, denoted {P1, P2, P3, P4, P5}, to generate feature maps of optimal size.
the generation of the feature map comprises the following steps:
S21, upsampling the deepest feature map P5 by a factor of 2 with nearest-neighbor interpolation;
S22, extracting the output feature map P4 of the residual convolution module C4 adjacent to C5 and applying a 1×1 convolution for dimensionality reduction;
S23, fusing it with the upsampled feature map P5 by adding the pixel values at corresponding positions;
S24, applying a 3×3 convolution to the fused feature map to reduce the aliasing effect introduced by upsampling;
S25, iterating this process until feature maps of the optimal size are generated.
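Steps S21–S25 describe a feature-pyramid-style top-down pathway. The sketch below is a minimal PyTorch rendering of that loop, assuming ResNet-50 stage channel counts (256, 512, 1024, 2048) and a common 256-channel output width; both are assumptions, as the patent does not fix them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """Fuse backbone stage outputs [C2..C5] into pyramid maps [P2..P5]."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # 1x1 convolutions for dimensionality reduction (step S22)
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        # 3x3 convolutions to reduce upsampling aliasing (step S24)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, feats):                 # feats: [c2, c3, c4, c5], shallow to deep
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        p = laterals[-1]                      # start from the deepest map (P5)
        outs = [self.smooth[-1](p)]
        for i in range(len(feats) - 2, -1, -1):
            up = F.interpolate(p, scale_factor=2, mode="nearest")  # S21: 2x nearest-neighbor
            p = laterals[i] + up              # S23: addition at corresponding positions
            outs.insert(0, self.smooth[i](p))
        return outs                           # [P2, P3, P4, P5]
```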
S3, passing the feature maps generated above through RPNs 2, 3, 4 and 5 respectively to generate candidate anchors of different sizes; the anchor areas are set to 32 × 32, 64 × 64, 128 × 128 and 256 × 256 respectively, and the anchors generated by all the RPNs uniformly adopt the aspect ratios 1:1, 1:2 and 2:1, generating a number of candidate anchors; according to the ground-truth labels, each RPN uses binary classification and a bounding box regression function to screen out the anchor that covers the target most completely and correct it to serve as an ROI.
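For illustration, the anchor shapes implied by S3 can be enumerated directly: one base area per RPN level (32², 64², 128², 256²) combined with the three aspect ratios 1:1, 1:2 and 2:1. A small sketch, with the ratio convention (h/w) chosen here as an assumption:

```python
def make_anchors(base_area, ratios=(1.0, 0.5, 2.0)):
    """Return (w, h) anchor shapes for one RPN level such that
    w * h = base_area and h / w equals each ratio."""
    anchors = []
    for r in ratios:
        w = (base_area / r) ** 0.5
        h = w * r
        anchors.append((w, h))
    return anchors

# One call per RPN level:
levels = [make_anchors(a * a) for a in (32, 64, 128, 256)]
```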
S4, for ROIs of different scales, using the feature layers output by different residual convolution modules as input to the ROI pooling layer: large-scale ROIs use the output feature map of a deeper convolution module and small-scale ROIs that of a shallower one, with the ROI level k = ⌊k0 + log2(√(wh)/224)⌋ serving as the criterion for which layer's convolution module output to use, where w and h are the width and height of the ROI and k0 is the reference level, set to 5 to represent the size of feature map P5.
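The level-assignment rule in S4 reduces to a one-line function. A sketch, with the clamp to the available pyramid levels added as an assumption so that very small or very large ROIs still map to an existing feature map:

```python
import math

def roi_level(w, h, k0=5, k_min=2, k_max=5):
    """k = floor(k0 + log2(sqrt(w*h) / 224)); k0 = 5 means a 224x224 ROI
    is pooled from feature map P5, and larger ROIs map to deeper maps."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / 224.0))
    return max(k_min, min(k, k_max))
```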
S5, passing the ROIs generated in S4 into the ROI pooling layer, which uniformly maps the multi-scale ROIs to proposals of size 7 × 7; the proposals are projected onto the original building material sample image to generate proposal feature maps for subsequent processing by the BBOX and CLS branches;
S6, in the CLS branch, computing the class of each sample from the proposal feature map through a fully connected layer and softmax, and outputting the highest class probability as the confidence;
S7, in the BBOX branch, correcting the proposal region with a bounding box regression function to generate a more accurately positioned building material detection frame and identify the material performance state of the building material sample.
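Steps S5–S7 correspond to a standard two-branch detection head. The sketch below uses torchvision's roi_align as a stand-in for the ROI pooling layer of S5 (the patent names ROI pooling; RoIAlign is a closely related drop-in); the channel width, hidden size, and class count are assumptions.

```python
import torch
import torch.nn as nn
import torchvision.ops as ops

class DetectionHead(nn.Module):
    """Pool each ROI to a 7x7 proposal feature map, then classify (CLS)
    and regress box corrections (BBOX)."""
    def __init__(self, channels=256, num_classes=5):
        super().__init__()
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(channels * 7 * 7, 1024), nn.ReLU())
        self.cls = nn.Linear(1024, num_classes)        # CLS branch (S6)
        self.bbox = nn.Linear(1024, num_classes * 4)   # BBOX branch (S7): per-class deltas

    def forward(self, feature_map, rois):
        # rois: Tensor[K, 5] rows of (batch_index, x1, y1, x2, y2)
        pooled = ops.roi_align(feature_map, rois, output_size=(7, 7))
        x = self.fc(pooled)
        probs = torch.softmax(self.cls(x), dim=1)
        conf, label = probs.max(dim=1)                 # highest class probability = confidence
        return label, conf, self.bbox(x)
```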
There are, of course, many other specific embodiments of the invention, and the above is not to be taken as limiting. All technical solutions formed by equivalent substitution or equivalent transformation fall within the scope of protection of the invention.
Claims (4)
1. A building material sample identification method based on deep learning, characterized in that it comprises a model training stage and a sample identification stage,
the model training phase comprises the following steps:
S1, collecting and labeling building material samples to produce a building material sample data set; the data set contains building material samples of each category and is divided into a training set, a test set, and an evaluation set;
s2, constructing a multi-scale information fusion convolutional neural network for building material sample identification;
S3, performing data enhancement on the sample data set from S1 to obtain the best model performance;
the sample identification phase comprises the following steps:
S1, inputting the processed building material sample image into the model and extracting features through a ResNet pre-trained on ImageNet with its top layer removed, generating feature maps of optimal size;
S2, passing the feature maps generated in S1 through RPNs 2, 3, 4 and 5 respectively to generate candidate anchors of different sizes; the anchor area is set per level, and the anchors generated by all the RPNs uniformly adopt the aspect ratios 1:1, 1:2 and 2:1, generating a number of candidate anchors; according to the ground-truth labels, each RPN uses binary classification and a bounding box regression function to screen out the anchor that covers the target most completely and correct it to serve as an ROI;
S3, for ROIs of different scales, using the feature layers output by different residual convolution modules as input to the ROI pooling layer: large-scale ROIs use the output feature map of a deeper convolution module, with the ROI level k = ⌊k0 + log2(√(wh)/224)⌋ serving as the criterion for which layer's convolution module output to use;
S4, passing the ROIs generated in S3 into the ROI pooling layer, which uniformly maps the multi-scale ROIs to proposals of the same size; the proposals are projected onto the original building material sample image to generate proposal feature maps for subsequent processing by the BBOX and CLS branches;
S5, in the CLS branch, computing the class of each sample from the proposal feature map through a fully connected layer and softmax, and outputting the highest class probability as the confidence;
S6, in the BBOX branch, correcting the proposal region with a bounding box regression function to generate a more accurately positioned building material detection frame and identify the material performance state of the building material sample.
2. The building material sample identification method based on deep learning of claim 1, wherein: the model training phase S3 includes the following steps:
S31, combining multiple data enhancement methods to construct building material sample images at different scales and in different scenes, expanding the existing data to simulate complex recognition scenarios, thereby improving the model's learning of detailed feature information and enhancing its universality;
S32, setting the initial weights to weights pre-trained on ImageNet, the initial learning rate to 0.001, the learning-rate decay factor to 0.1 and the batch_size to 16, and setting the input image size;
S33, in the loss function, the RPN-series modules adopt binary classification loss and regression loss, the CLS branch adopts multi-class classification loss, and the BBOX branch adopts regression loss;
and S34, training on the training set and test set with an SGD (stochastic gradient descent) optimizer until model performance is optimal.
3. The building material sample identification method based on deep learning of claim 2, wherein: the method for enhancing data in S31 includes:
S311、Random Erasing:
(1) IRE, randomly selecting an occlusion position on the whole target image;
(2) ORE, randomly selecting an occlusion position in a bounding-box area of the target;
(3) combining both IRE and ORE;
S312、Hide and Seek:
the picture is divided into an S×S grid, each cell is hidden with a certain probability, and a different set of cells is hidden for the same picture in each training batch;
S313、Grid Mask:
to avoid deleting a complete target or losing context information through over-large deleted regions, as can occur in S311 and S312, Grid Mask sets four parameters x, y, r and d and builds a binary mask with keep ratio k = M/(H × W),
where r determines the size of the kept part of each mask unit, M is the number of reserved pixels, and H and W are the image height and width; x and y are offset coordinates generated randomly on the image; the unoccluded area takes the value 1 and the occluded area the value 0, yielding a mask at the same resolution as the original image, which is then multiplied with the original image to obtain the augmented image;
S314、Mixup
performing mixed enhancement on the images, mixing images across different classes; the algorithm can be summarized as x̃ = λx1 + (1 − λ)x2,
where x1 and x2 are pixels of different images and λ is the mixing weight;
S315、Cutmix
a portion of the image area is randomly cropped out and filled with the corresponding region's pixel values from other data in the training set.
4. The building material sample identification method based on deep learning of claim 1, wherein: the step of generating the optimal feature map after the sample identification stage S1 performs sample extraction includes:
S11, denoting the last n residual convolution modules in the ResNet as {C1, C2, C3, …, Cn} and extracting the output feature map of each residual module, denoted {P1, P2, P3, …, Pn};
S12, upsampling the deepest feature map Pn by a factor of 2 with nearest-neighbor interpolation;
S13, extracting the output feature map Pn−1 of the residual convolution module Cn−1 adjacent to Cn and applying a 1×1 convolution for dimensionality reduction;
S14, fusing it with the upsampled feature map by adding the pixel values at corresponding positions;
S15, applying a 3×3 convolution to the fused feature map to reduce the aliasing effect introduced by upsampling;
and S16, iterating this process until the optimal-size feature maps are generated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011201983.7A CN112329771B (en) | 2020-11-02 | 2020-11-02 | Deep learning-based building material sample identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112329771A true CN112329771A (en) | 2021-02-05 |
CN112329771B CN112329771B (en) | 2024-05-14 |
Family
ID=74323985
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011201983.7A Active CN112329771B (en) | 2020-11-02 | 2020-11-02 | Deep learning-based building material sample identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112329771B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10303981B1 (en) * | 2018-10-04 | 2019-05-28 | StradVision, Inc. | Learning method and testing method for R-CNN based object detector, and learning device and testing device using the same |
CN110533024A (en) * | 2019-07-10 | 2019-12-03 | 杭州电子科技大学 | Biquadratic pond fine granularity image classification method based on multiple dimensioned ROI feature |
CN111160249A (en) * | 2019-12-30 | 2020-05-15 | 西北工业大学深圳研究院 | Multi-class target detection method of optical remote sensing image based on cross-scale feature fusion |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112967296A (en) * | 2021-03-10 | 2021-06-15 | 重庆理工大学 | Point cloud dynamic region graph convolution method, classification method and segmentation method |
CN113657202A (en) * | 2021-07-28 | 2021-11-16 | 万翼科技有限公司 | Component identification method, training set construction method, device, equipment and storage medium |
CN113762229A (en) * | 2021-11-10 | 2021-12-07 | 山东天亚达新材料科技有限公司 | Intelligent identification method and system for building equipment in building site |
CN113762229B (en) * | 2021-11-10 | 2022-02-08 | 山东天亚达新材料科技有限公司 | Intelligent identification method and system for building equipment in building site |
Also Published As
Publication number | Publication date |
---|---|
CN112329771B (en) | 2024-05-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||