CN114463637A - Winter wheat remote sensing identification analysis method and system based on deep learning - Google Patents
- Publication number
- CN114463637A (application number CN202210117044.7A)
- Authority
- CN
- China
- Prior art keywords
- area
- winter wheat
- data set
- semantic segmentation
- file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30188—Vegetation; Agriculture
Abstract
The invention provides a winter wheat remote sensing identification analysis method and system based on deep learning. The method comprises the following steps: creating a label vector file of a polygonal area, converting the label vector file into a raster file, generating square vector data, batch-cutting the median composite images of the five growth periods of the polygonal area and the raster file by using the square vector data, and adjusting their size to obtain a training data set, a verification data set and a test data set; training the semantic segmentation model by taking the training data set and the verification data set of each growth period as input, and classifying the test set of each growth period; generating a spatial distribution map of the winter wheat in each growth period, and performing spatial mapping and area extraction of the winter wheat. The scheme provided by the invention is based on a semantic segmentation classification method; the overall precision is high, the classification effect is good, and the remote sensing identification precision of winter wheat is highest in the heading stage. The winter wheat area of the research area is extracted with high precision by the deep learning method at the jointing-heading stage.
Description
Technical Field
The invention belongs to the field of winter wheat remote sensing identification, and particularly relates to a winter wheat remote sensing identification analysis method and system based on deep learning.
Background
Existing research has calculated the separability between winter wheat in different growth stages and other land use and cover types by the Jeffries-Matusita (J-M) distance method, and finally determined that the sentinel-2 image of the heading stage is the best period for extracting the winter wheat area in the northern and central regions of Anhui province.
Other research has combined Landsat-8 OLI and sentinel-2 data through a time aggregation technique to study the remote sensing identification of winter wheat in each growth period in Shandong province, finally determining that data from the maturity and green-turning periods are more effective and give a better winter wheat remote sensing identification result.
The prior art has the following defect:
these studies adopt only the random forest classification method to study the remote sensing identification of winter wheat in a region, and do not further evaluate the influence of different methods, such as deep learning, on the remote sensing identification of winter wheat in each growth period.
Disclosure of Invention
In order to solve the technical problems, the invention provides a technical scheme of a winter wheat remote sensing identification analysis method and system based on deep learning, and aims to solve the technical problems.
The invention discloses a winter wheat remote sensing identification analysis method based on deep learning, which comprises the following steps:
step S1, creation of the training, verification and test data sets: selecting a required polygonal area from part of the research area, creating a label vector file of the polygonal area, converting the label vector file into a raster file and reclassifying it, generating square vector data, batch-cutting the sentinel-2 median composite images of the five growth periods of the polygonal area and the raster file by using the square vector data, and adjusting the size of the cut sentinel-2 median composite images and raster file to obtain a training data set, a verification data set and a test data set; the obtained sentinel-2 median composite images of the five growth periods share one label;
step S2, cutting and processing the sentinel-2 median synthetic images of five growth periods in the whole region of the research area to obtain a spatial distribution data set;
s3, building a U-Net semantic segmentation model and setting parameters;
step S4, training a U-Net semantic segmentation model: training the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input to obtain the U-Net semantic segmentation model under the optimal weight of each growth period;
step S5, calling a U-Net semantic segmentation model under the optimal weight of each growth period, and classifying the test set of each growth period to obtain a classified result image;
s6, calling a U-Net semantic segmentation model under the optimal weight of each growth period, classifying the spatial distribution data sets of each growth period, and splicing classification results to generate a winter wheat spatial distribution map of each growth period;
s7, evaluating the classification precision of winter wheat by a U-Net semantic segmentation model;
step S8, spatial mapping and area extraction of winter wheat: selecting the winter wheat spatial distribution map of the growth period with the highest classification precision, counting the winter wheat area to obtain an extraction area, and performing precision evaluation on the extraction area by using the extraction area and the ground truth value.
According to the method of the first aspect of the present invention, in step S1, the specific method for creating the tag vector file of the polygon area includes:
and cutting a synthetic image of the whole growth cycle of the winter wheat by using the polygonal surface elements of the polygonal area, and establishing a label vector file of the polygonal area by referring to the synthetic images of five growth periods and field real measuring points of the winter wheat.
According to the method of the first aspect of the present invention, in step S1, the specific method for converting the tag vector file into a raster file and reclassifying the raster file to generate square vector data includes:
and converting the label vector file into a raster file, reclassifying the raster file into classes 0 and 1, randomly creating point elements in the polygonal area, then establishing a graphic buffer area for the point elements, generating square vector data with the size of 1280m, wherein the boundary of the square vector data is ensured to be completely in the raster.
In the method according to the first aspect of the present invention, in the step S1, the sentinel-2 median composite image includes:
red, green, blue and near infrared four bands;
adjusting the sizes of the cut sentinel-2 median composite image and raster file to 128 pixels by 128 pixels.
In the method according to the first aspect of the present invention, in step S2, the specific method for obtaining the spatial distribution data set by cutting and processing the sentinel-2 median synthetic image of five growth periods of the whole area of the study region includes:
and cutting the sentinel-2 median composite image of five growth periods in the whole area in the study area into image blocks with the size of 512 pixels by 512 pixels, and removing the image blocks with all background values to obtain a spatial distribution data set.
According to the method of the first aspect of the present invention, in the step S7, the specific method for assessing the classification accuracy of winter wheat by using the U-Net semantic segmentation model includes:
and comparing the classified result image with the self-made label, and quantitatively evaluating the winter wheat semantic segmentation accuracy of the five growth period test set images by using the precision rate, the recall rate, the F1-score, the cross-over ratio and the precision rate.
According to the method of the first aspect of the present invention, in step S8, the specific formula for performing the accuracy evaluation on the extracted area by applying the extracted area and the ground true value includes:
P = (1 - |S - S'| / S') × 100%, where P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
The invention discloses a winter wheat remote sensing identification analysis system based on deep learning in a second aspect, which comprises:
the first processing module is configured to select a required polygonal area from part of the research area, create a label vector file of the polygonal area, convert the label vector file into a raster file and reclassify it, generate square vector data, batch-cut the sentinel-2 median composite images of the five growth periods of the polygonal area and the raster file by using the square vector data, and adjust the size of the cut sentinel-2 median composite images and raster file to obtain a training data set, a verification data set and a test data set; the obtained sentinel-2 median composite images of the five growth periods share one label;
the second processing module is configured to cut and process the sentinel-2 median synthetic images of five growth periods of the whole region of the research area to obtain a spatial distribution data set;
the third processing module is configured to build a U-Net semantic segmentation model and set parameters;
the fourth processing module is configured to train the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input to obtain the U-Net semantic segmentation model under the optimal weight of each growth period;
the fifth processing module is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the test set of each growth period and obtain a classified result image;
the sixth processing module is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the spatial distribution data sets of each growth period, and then splice classification results to generate a winter wheat spatial distribution map of each growth period;
the seventh processing module is configured to evaluate the classification precision of winter wheat of the U-Net semantic segmentation model;
and the eighth processing module is configured to select the winter wheat spatial distribution map in the growth period with the highest classification precision, count the area of the winter wheat to obtain an extraction area, and perform precision evaluation on the extraction area by applying the extraction area and the ground real value.
A third aspect of the invention discloses an electronic device. The electronic equipment comprises a memory and a processor, the memory stores a computer program, and the processor executes the computer program to realize the steps of the winter wheat remote sensing identification analysis method based on deep learning in any one of the first aspect of the disclosure.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer readable storage medium stores thereon a computer program, and when the computer program is executed by a processor, the steps in a winter wheat remote sensing identification and analysis method based on deep learning in any one of the first aspect of the present disclosure are implemented.
The scheme provided by the invention is based on a U-Net semantic segmentation classification method; the overall precision is high, the classification effect is good, and the remote sensing identification precision of winter wheat is highest in the jointing-heading stage. The winter wheat area of the research area is extracted with high precision by the deep learning method at the jointing-heading stage.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a winter wheat remote sensing identification analysis method based on deep learning according to an embodiment of the invention;
fig. 2 is the winter wheat spatial distribution map of the main northern Yunnan area for 2019-2020 generated by the deep learning method according to an embodiment of the present invention;
FIG. 3 is a structural diagram of a winter wheat remote sensing identification analysis system based on deep learning according to an embodiment of the invention;
fig. 4 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention discloses a winter wheat remote sensing identification analysis method based on deep learning. Fig. 1 is a flowchart of a remote sensing recognition analysis method for winter wheat based on deep learning according to an embodiment of the present invention, as shown in fig. 1, the method includes:
step S1, creation of the training, verification and test data sets: selecting a required polygonal area from part of the research area, creating a label vector file of the polygonal area, converting the label vector file into a raster file and reclassifying it, generating square vector data, batch-cutting the sentinel-2 median composite images of the five growth periods of the polygonal area and the raster file by using the square vector data, and adjusting the size of the cut sentinel-2 median composite images and raster file to obtain a training data set, a verification data set and a test data set; the obtained sentinel-2 median composite images of the five growth periods share one label;
specifically, in the above step, the obtained sentinel-2 median synthetic images of the five growth periods share a grid file obtained after cutting, namely the label;
step S2, cutting and processing the sentinel-2 median synthetic images of five growth periods in the whole region of the research area to obtain a spatial distribution data set;
s3, building a U-Net semantic segmentation model and setting parameters;
step S4, training a U-Net semantic segmentation model: training the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input to obtain the U-Net semantic segmentation model under the optimal weight of each growth period;
step S5, calling a U-Net semantic segmentation model under the optimal weight of each growth period, and classifying the test set of each growth period to obtain a classified result image;
s6, calling a U-Net semantic segmentation model under the optimal weight of each growth period, classifying the spatial distribution data sets of each growth period, and splicing classification results to generate a winter wheat spatial distribution map of each growth period;
s7, evaluating the classification precision of winter wheat by a U-Net semantic segmentation model;
step S8, spatial mapping and area extraction of winter wheat: selecting a winter wheat spatial distribution map with the highest classification precision in a growth period, counting the area of the winter wheat to obtain an extraction area, and applying the extraction area and a ground true value to perform precision evaluation on the extraction area.
In step S1, the training, verification and test data sets are produced: a required polygonal area is selected from part of the research area, a label vector file of the polygonal area is created, the label vector file is converted into a raster file and reclassified, square vector data are generated, the sentinel-2 median composite images of the five growth periods of the polygonal area and the raster file are batch-cut using the square vector data, and the sizes of the cut sentinel-2 median composite images and raster file are adjusted to obtain a training data set, a verification data set and a test data set; the sentinel-2 median composite images of the five growth periods share one raster label.
In some embodiments, in step S1, the specific method for creating the label vector file of the polygon area includes:
cutting a composite image of the whole growth cycle of the winter wheat by using the polygonal surface elements of the polygonal area, and establishing the label vector file of the polygonal area with reference to the composite images of the five growth periods and field measurement points of the winter wheat;
the specific method for converting the label vector file into a raster file and reclassifying to generate square vector data comprises the following steps:
converting the label vector file into a raster file, reclassifying the raster file into classes 0 and 1, randomly creating point elements in the polygonal area, then building a graphic buffer around the point elements to generate square vector data of size 1280 m by 1280 m, and ensuring that the boundaries of the square vector data lie entirely within the raster;
the sentienl-2 median composite image comprises:
red, green, blue and near infrared four bands;
adjusting the sizes of the cut sentinel-2 median composite image and raster file to 128 pixels by 128 pixels.
Specifically, since the research area lacks winter wheat label data, and the training process of U-Net requires an image data set and corresponding labels, the data set for the deep learning classification and extraction of winter wheat is self-made by the following method.
Since the image data of the research area are large, making labels for all of them would be labor-intensive, so only part of the required polygonal areas are selected from the research area. The principles for selecting the areas are as follows: (1) select areas with strong ground object image features and obvious contrast; (2) the selected areas should cover all ground object types; (3) the selected areas should come from each city in the research area.
Then, in ArcMap, a composite image of the whole growth cycle of the winter wheat is cut using the polygonal surface elements of the polygonal area; the label vector file of the polygonal area is created with reference to the composite images of the five growth periods and field measurement points of the winter wheat, the id field value of the label vector file is modified, and a spatial reference is set for it.
The label vector file is converted into a raster file and reclassified into classes 0 and 1; point elements are randomly created in the polygonal area, a graphic buffer is then built around the point elements to generate square vector data of size 1280 m by 1280 m, and the boundaries of the square vector data are ensured to lie entirely within the raster.
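The square-sample generation step can be sketched in plain Python; coordinates are assumed to be in metres, the function names are illustrative, and the rejection test mirrors the requirement that the square boundaries fall entirely within the raster extent:

```python
import random

def square_around_point(x, y, size=1280.0):
    """Return the envelope (xmin, ymin, xmax, ymax) of a square of
    `size` metres centred on the point (x, y)."""
    h = size / 2.0
    return (x - h, y - h, x + h, y + h)

def sample_squares(extent, n, size=1280.0, seed=0):
    """Randomly create point elements inside `extent` and keep only the
    squares that lie entirely within the raster extent."""
    xmin, ymin, xmax, ymax = extent
    rng = random.Random(seed)
    squares = []
    while len(squares) < n:
        x, y = rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)
        sq = square_around_point(x, y, size)
        # reject squares whose boundary leaves the raster
        if sq[0] >= xmin and sq[1] >= ymin and sq[2] <= xmax and sq[3] <= ymax:
            squares.append(sq)
    return squares
```

In practice this would be done with point elements and graphic buffers in ArcMap as the description states; the sketch only illustrates the geometry.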
The square vector data are then used to batch-cut the sentinel-2 median composite images (containing the four bands red, green, blue and near infrared) of the five growth periods of the polygonal area and the raster file, and a python program is used to adjust the size of the cut sentinel-2 median composite images and raster file to 128 pixels by 128 pixels.
This completes the data set production. The data set is randomly divided into a training set, a verification set and a test set at a ratio of 7:2:1, with 3424 samples in the training set, 857 samples in the verification set and 458 samples in the test set. Before the images in the training set are used as input to the training algorithm, they are normalized, and the training data set is expanded: new images are generated by adjusting the colors of the images and applying rotation and symmetry operations.
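The 7:2:1 split and per-band normalization described above can be sketched as follows; the function names and the min-max normalization choice are illustrative assumptions, not the patent's exact implementation:

```python
import random

def split_dataset(samples, ratios=(0.7, 0.2, 0.1), seed=42):
    """Randomly split a list of samples into training, verification and
    test sets at the given 7:2:1 ratio."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

def normalize(band_values):
    """Min-max normalize one band of pixel values to the range [0, 1]."""
    lo, hi = min(band_values), max(band_values)
    if hi == lo:
        return [0.0] * len(band_values)
    return [(v - lo) / (hi - lo) for v in band_values]
```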
In step S2, the sentinel-2 median composite images of the five growth periods of the whole research area are cut and processed to obtain a spatial distribution data set.
In some embodiments, in step S2, the specific method for cutting and processing the sentinel-2 median composite images of the five growth periods of the whole research area to obtain the spatial distribution data set includes:
cutting the sentinel-2 median composite images of the five growth periods of the whole research area into image blocks of 512 pixels by 512 pixels, and removing the image blocks whose values are all background to obtain the spatial distribution data set.
In step S3, building a U-Net semantic segmentation model and setting parameters.
Specifically, the U-Net network structure consists of a down-sampling part and an up-sampling part, so that the whole network resembles a 'U'. The first part performs feature extraction on the input image by means of convolutional layers and max pooling layers, each 3 x 3 convolutional layer being followed by a ReLU activation function and a 2 x 2 max pooling operation. The second part first performs a deconvolution operation and then concatenates the result with the corresponding feature map to recover the resolution; a 1 x 1 convolution kernel is adopted in the final output layer. U-Net is based on the encoder-decoder structure and achieves feature fusion through concatenation; its structure is concise and stable.
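As a minimal sketch, the U-shaped structure described above (3 x 3 convolutions with ReLU, 2 x 2 max pooling, deconvolution, skip concatenation, 1 x 1 output convolution) might look as follows in PyTorch; the depth and channel widths here are illustrative, not the patent's exact configuration:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions, each followed by a ReLU activation
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    """One-level U-shaped network: max-pooling downsampling, deconvolution
    upsampling, skip connection by concatenation, 1x1 output convolution."""
    def __init__(self, bands=4, classes=2):
        super().__init__()
        self.enc1 = conv_block(bands, 32)
        self.pool = nn.MaxPool2d(2)                         # 2x2 max pooling
        self.enc2 = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)   # deconvolution
        self.dec1 = conv_block(64, 32)   # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, classes, 1)               # 1x1 output layer

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([e1, self.up(e2)], dim=1))  # skip concat
        return self.head(d1)
```

A full U-Net would repeat the encoder/decoder blocks over several resolution levels; the skip-concatenation pattern is the same at each level.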
The hardware environment for the training, verification and testing processes is as follows: Intel(R) Xeon(R) Gold 6226R 2.9 GHz 16-core processor, NVIDIA GeForce RTX 3090 24 GB graphics card, and 256 GB DDR4 memory. The software environment is python 3.7 and PyTorch 1.7.1.
The learning rate is set to 1 × 10⁻⁷ and is adaptively adjusted by calling ReduceLROnPlateau in PyTorch: when the loss function on the verification set no longer decreases after 20 epochs, the learning rate is reduced to one tenth of its previous value. The number of classes is set to 2, the batch size to 128, the number of bands to 4, the number of training epochs to 400, and the input image size to 128 × 128 pixels; the Adam algorithm is selected as the network optimizer, and the cross-entropy loss is selected as the loss function. In addition, to prevent overfitting and gradient vanishing, an L2 regularization term with a coefficient of 5 × 10⁻⁴ is added during training.
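Under these settings, the optimizer, scheduler and loss configuration could be sketched in PyTorch as follows; the stand-in model is a placeholder assumption for the U-Net, and Adam's `weight_decay` is used as the L2 regularization term:

```python
import torch

# Stand-in model; in practice the U-Net segmentation model would be used.
model = torch.nn.Conv2d(4, 2, kernel_size=1)

# Adam optimizer with initial learning rate 1e-7;
# weight_decay=5e-4 adds the L2 regularization term.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-7, weight_decay=5e-4)

# Reduce the learning rate to one tenth when the verification-set loss
# has not decreased for 20 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=20)

# Cross-entropy loss for the two-class (winter wheat / background) task.
criterion = torch.nn.CrossEntropyLoss()
```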
In step S4, the U-Net semantic segmentation model is trained: the training data set and the verification data set of each growth period are taken as input to train the U-Net semantic segmentation model, obtaining the U-Net semantic segmentation model under the optimal weights of each growth period.
Specifically, after all parameters are set, the same U-Net network model is used for model training on the training and verification data sets of each growth period of the research area. In the model training process for the 5 growth periods, the monitored quantity is the accuracy on the verification set; after the model stabilizes, the weights at which the verification-set accuracy is maximal are automatically saved as the optimal weights, and the model under the optimal weights is stored.
In step S5, a U-Net semantic segmentation model under the optimal weight for each growth period is called, and the test sets for each growth period are classified to obtain a classified result image.
In step S6, a U-Net semantic segmentation model under the optimal weight of each growth period is called, the spatial distribution data sets of each growth period are classified, and then the classification results are spliced to generate a winter wheat spatial distribution map of each growth period.
Specifically, the U-Net semantic segmentation model under the optimal weights of each growth period is called to classify the spatial distribution data set of each growth period, and the classification results are then stitched with the gdal library in python to generate the winter wheat spatial distribution map of each growth period.
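The patent performs the stitching with the gdal library; purely to illustrate the mosaic logic (without georeferencing, which gdal would handle), a numpy sketch of reassembling the classified 512 × 512 blocks might look like:

```python
import numpy as np

def stitch_tiles(tiles, grid_shape, tile=512):
    """Stitch a row-major list of (tile, tile) classification blocks back
    into one (rows*tile, cols*tile) map: the inverse of the cutting step."""
    rows, cols = grid_shape
    mosaic = np.zeros((rows * tile, cols * tile), dtype=tiles[0].dtype)
    for k, block in enumerate(tiles):
        i, j = divmod(k, cols)  # row-major position of block k
        mosaic[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile] = block
    return mosaic
```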
And in the step S7, the classification accuracy of the winter wheat of the U-Net semantic segmentation model is evaluated.
In some embodiments, in the step S7, the specific method for assessing the classification accuracy of the winter wheat by the U-Net semantic segmentation model includes:
The classified result images are compared with the self-made labels, and the winter wheat semantic segmentation accuracy of the test-set images of the five growth periods is quantitatively evaluated using precision, recall, F1-score, intersection over union (IoU) and accuracy.
Specifically:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
IoU = TP / (TP + FP + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall)

In the formulas: Recall represents the recall rate, Precision the precision rate, Accuracy the accuracy, and IoU the intersection over union; TP is the number of true positives, TN true negatives, FP false positives, and FN false negatives; F1-score is the harmonic mean of precision and recall.
As shown in fig. 2, a 2019-2020 spatial distribution map of winter wheat in the main producing area of the study region was generated by the deep learning method of the present invention. With the U-Net semantic segmentation classification method, the IoU values obtained on the test sets of the different growth periods are 0.78, 0.84, 0.86, 0.88 and 0.82, respectively; the jointing-heading stage is the highest, and the model performs best there. In addition, the precision, recall, F1-score and accuracy at the jointing-heading stage are 0.94, 0.93, 0.94 and 0.94, respectively.
In step S8, spatial mapping and area extraction of winter wheat: the winter wheat spatial distribution map of the growth period with the highest classification precision is selected, the winter wheat area is counted to obtain the extraction area, and the extraction area and the ground truth are applied to evaluate the accuracy of the extracted area.
In some embodiments, in step S8, the extraction area and the ground truth are applied to evaluate the accuracy of the extracted area using the following formula:

P = (1 − |S − S'| / S') × 100%

where P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
Specifically, the spatial distribution map of winter wheat at the jointing-heading stage is imported into GEE, and the winter wheat area of every pixel in that period is summed to compute the extraction area of winter wheat for the whole research area. The winter wheat planting-area precision is obtained by comparing the estimated extraction area of the research area with the ground truth; combined with the official agricultural statistics yearbook, the accuracy of the extraction area is evaluated with the following formula:

P = (1 − |S − S'| / S') × 100%

where P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
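A one-line sketch of this area-precision formula; the units (e.g. thousand hectares) only need to match between the two inputs:

```python
def area_precision(extracted, ground_truth):
    """Area extraction precision P = (1 - |S - S'| / S') * 100, in percent,
    following the variable definitions above."""
    return (1 - abs(extracted - ground_truth) / ground_truth) * 100
```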
With the deep learning method, the extracted winter wheat area of the research region at the jointing-heading stage is 895.84 thousand hectares, and the area extraction precision is 88.44%.
In conclusion, the IoU values obtained by the proposed scheme on the test sets of the different growth periods are 0.78, 0.84, 0.86, 0.88 and 0.82, respectively, with the jointing-heading stage highest and the model performing best there. In addition, the precision, recall, F1-score and accuracy at the jointing-heading stage are 0.94, 0.93, 0.94 and 0.94, respectively. The extracted winter wheat area of the research region at the jointing-heading stage is 895.84 thousand hectares, and the area extraction precision is 88.44%.
This study also used the FastFCN and DeepLabV3+ semantic segmentation networks in deep learning to evaluate the recognition accuracy of winter wheat at the jointing-heading stage, as shown in Table 1.
TABLE 1. Winter wheat accuracy evaluation indexes of different semantic segmentation networks at the jointing-heading stage

Network | Precision | Recall | F1-score | Accuracy | IoU
---|---|---|---|---|---
U-Net | 0.94 | 0.93 | 0.94 | 0.94 | 0.88
FastFCN | 0.91 | 0.91 | 0.91 | 0.92 | 0.86
DeepLabV3+ | 0.92 | 0.92 | 0.92 | 0.93 | 0.88
Compared with the other two networks, the winter wheat semantic segmentation performance of the U-Net network under small-sample data is superior, and its hardware requirements are not excessive, which makes the U-Net semantic segmentation method convenient to popularize for extracting the winter wheat planting area.
A second aspect of the invention discloses a winter wheat remote sensing identification and analysis system based on deep learning. FIG. 3 is a structural diagram of the winter wheat remote sensing identification analysis system based on deep learning according to an embodiment of the invention; as shown in FIG. 3, the system 100 includes:
the first processing module 101 is configured to select a required polygonal area from part of the study area, create a label vector file of the polygonal area, convert the label vector file into a raster file, reclassify the raster file, generate square vector data, batch-cut the Sentinel-2 median composite images of the five growth periods of the polygonal area and the raster file using the square vector data, and adjust the size of the cut Sentinel-2 median composite images and raster files to obtain the training data set, the verification data set and the test data set; the obtained Sentinel-2 median composite images of the five growth periods share one label;
a second processing module 102, configured to cut and process the Sentinel-2 median composite images of the five growth periods of the whole study area to obtain a spatial distribution data set;
the third processing module 103 is configured to construct a U-Net semantic segmentation model and set parameters;
the fourth processing module 104 is configured to train the U-Net semantic segmentation model by using the training data set and the verification data set of each growth period as input, so as to obtain a U-Net semantic segmentation model under the optimal weight of each growth period;
the fifth processing module 105 is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the test set of each growth period, and obtain a classified result image;
the sixth processing module 106 is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the spatial distribution data sets of each growth period, and then splice the classification results to generate a winter wheat spatial distribution map of each growth period;
a seventh processing module 107, configured to evaluate the classification accuracy of winter wheat by using a U-Net semantic segmentation model;
the eighth processing module 108 is configured to select the winter wheat spatial distribution map of the growth period with the highest classification precision, count the winter wheat area to obtain the extraction area, and evaluate the accuracy of the extracted area using the extraction area and the ground truth.
According to the system of the second aspect of the present invention, the first processing module 101 is specifically configured such that the specific method for creating the label vector file of the polygonal area includes:
cutting a composite image of the whole winter wheat growth cycle using the polygonal surface elements of the polygonal area, and creating the label vector file of the polygonal area by referring to the composite images of the five growth periods and field-measured winter wheat ground points;
the specific method for converting the label vector file into a raster file and reclassifying to generate square vector data comprises the following steps:
converting the label vector file into a raster file, reclassifying the raster file into classes 0 and 1, randomly creating point elements inside the polygonal area, then building a graphic buffer around the point elements to generate square vector data of size 1280 m × 1280 m, and ensuring that the boundaries of the square vector data lie entirely within the raster;
the Sentinel-2 median composite image comprises:
four bands: red, green, blue and near infrared;
the sizes of the cut Sentinel-2 median composite image and the raster file are adjusted to 128 pixels × 128 pixels.
Specifically, since the research area lacks winter wheat label data and the U-Net training process requires an image data set with corresponding labels, a self-made data set is produced by the following method for deep learning classification and extraction of winter wheat.
Since the image data of the study area is large and making labels is time- and labor-consuming, part of the required polygonal area is selected from the study area. The principles for selecting the area are: (1) select areas with strong ground-object image features and obvious contrast; (2) the selected areas should cover all ground-object types; (3) the selected areas should come from every city in the study area.
Then, in ArcMap, a composite image of the whole winter wheat growth cycle is cut using the polygonal surface elements of the polygonal area; a label vector file of the polygonal area is created by referring to the composite images of the five growth periods and the field-measured winter wheat ground points; the id field value of the label vector file is modified; and a spatial reference is set for the label vector file.
The label vector file is converted into a raster file and reclassified into classes 0 and 1; point elements are randomly created inside the polygonal area; a graphic buffer is then built around the point elements to generate square vector data of size 1280 m × 1280 m, whose boundaries are ensured to lie entirely within the raster.
The square vector data are then used to batch-cut the Sentinel-2 median composite images (comprising the red, green, blue and near-infrared bands) of the five growth periods of the polygonal area and the raster file, and a Python program adjusts the size of the cut Sentinel-2 median composite images and raster files to 128 pixels × 128 pixels.
This completes data set production. The data set was randomly divided into a training set, a validation set and a test set at a 7:2:1 ratio: 3424 training samples, 857 validation samples and 458 test samples. Before the training-set images are fed to the training algorithm, they are normalized, and the training data set is augmented, that is, new images are generated by adjusting the colors of the training-set images and applying rotation and symmetry (flip) operations.
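The 7:2:1 random split can be sketched as follows; this is an assumed procedure, since the patent does not give the splitting code:

```python
import random

def split_dataset(samples, seed=0):
    """Randomly split samples into train/val/test at a 7:2:1 ratio."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.7)
    n_val = int(n * 0.2)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```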
According to the system of the second aspect of the present invention, the second processing module 102 is specifically configured such that cutting and processing the Sentinel-2 median composite images of the five growth periods of the whole study area to obtain the spatial distribution data set specifically includes:
cutting the Sentinel-2 median composite images of the five growth periods of the whole study area into image blocks of 512 pixels × 512 pixels and removing the image blocks whose values are all background, to obtain the spatial distribution data set.
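A minimal sketch of the tiling and background-removal step; edge remainders are ignored here for simplicity, and the background value is assumed to be 0:

```python
import numpy as np

def tile_image(image, tile=512, background=0):
    """Cut an (H, W[, bands]) array into tile x tile blocks and drop
    blocks that are entirely the background value."""
    h, w = image.shape[:2]
    blocks = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            block = image[r:r + tile, c:c + tile]
            if np.any(block != background):   # keep blocks with real data
                blocks.append(block)
    return blocks
```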
According to the system of the second aspect of the present invention, the third processing module 103 is specifically configured such that the U-Net network structure consists of a down-sampling part and an up-sampling part, the whole network being shaped like a 'U'. The first part performs feature extraction on the input image through convolutional layers and max-pooling layers, each 3 × 3 convolutional layer being followed by a ReLU activation function and a 2 × 2 max-pooling operation. The second part first performs a deconvolution operation and then concatenates the result with the corresponding feature map to recover the resolution; a 1 × 1 convolution kernel is adopted in the final output layer. U-Net is based on the encoder-decoder structure and realizes feature fusion through concatenation, giving a concise and stable structure.
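The described structure can be sketched as a minimal two-level U-Net in PyTorch; the depth and channel widths here are illustrative assumptions, since the patent's network is deeper:

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Minimal U-Net sketch matching the description above: 3x3 convs with
    ReLU, 2x2 max pooling, deconvolution, skip concatenation, 1x1 output."""

    def __init__(self, in_bands=4, n_classes=2):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1 = block(in_bands, 32)
        self.enc2 = block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)  # deconvolution
        self.dec1 = block(64, 32)      # 64 = 32 (skip) + 32 (upsampled)
        self.out = nn.Conv2d(32, n_classes, 1)  # 1x1 output layer

    def forward(self, x):
        e1 = self.enc1(x)                         # down-sampling path
        e2 = self.enc2(self.pool(e1))
        u = self.up(e2)                           # up-sampling path
        d = self.dec1(torch.cat([u, e1], dim=1))  # skip-connection fusion
        return self.out(d)
```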
The hardware environment for training, verification and testing is: an Intel(R) Xeon(R) Gold 6226R 2.9 GHz 16-core processor, an NVIDIA GeForce RTX 3090 24 GB graphics card, and 256 GB DDR4 memory. The software environment is Python 3.7 and PyTorch 1.7.1.
The learning rate is set to 1 × 10⁻⁷ and is adaptively adjusted by calling ReduceLROnPlateau in PyTorch: when the loss function on the verification set has not decreased for 20 epochs, the learning rate is reduced to one tenth of its current value. The number of classes is set to 2, the batch size to 128, the number of bands to 4, the number of training epochs to 400, and the input image size to 128 × 128 pixels. The Adam algorithm is selected for the network optimizer, and the cross-entropy loss function is chosen. In addition, to prevent model overfitting and gradient vanishing, an L2 regularization term with a coefficient of 5 × 10⁻⁴ is added during training.
According to the system of the second aspect of the present invention, the fourth processing module 104 is specifically configured such that, after all parameters are set, the same U-Net network model is used to perform model training on the training and verification data sets of each growth period of the research area. During model training for the 5 growth periods, the monitored quantity is the accuracy on the verification set; once training stabilizes, the weights at the epoch of maximum verification accuracy are automatically saved as the optimal weights, and the model under those optimal weights is stored.
According to the system of the second aspect of the present invention, the sixth processing module 106 is specifically configured to invoke the U-Net semantic segmentation model under the optimal weight of each growth period, classify the spatial distribution data sets of each growth period, and then splice the classification results with the GDAL library in Python to generate a spatial distribution map of winter wheat for each growth period.
According to the system of the second aspect of the present invention, the seventh processing module 107 is specifically configured such that the specific method for assessing the classification accuracy of winter wheat by using the U-Net semantic segmentation model includes:
comparing the classified result images with the self-made labels, and quantitatively evaluating the winter wheat semantic segmentation accuracy of the test-set images of the five growth periods using precision, recall, F1-score, intersection over union (IoU) and accuracy.
Specifically:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
IoU = TP / (TP + FP + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall)

In the formulas: Recall represents the recall rate, Precision the precision rate, Accuracy the accuracy, and IoU the intersection over union; TP is the number of true positives, TN true negatives, FP false positives, and FN false negatives; F1-score is the harmonic mean of precision and recall.
According to the system of the second aspect of the present invention, the eighth processing module 108 is specifically configured to apply the extraction area and the ground truth to evaluate the accuracy of the extracted area using the following formula:

P = (1 − |S − S'| / S') × 100%

where P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
Specifically, the spatial distribution map of winter wheat at the jointing-heading stage is imported into GEE, and the winter wheat area of every pixel in that period is summed to compute the extraction area of winter wheat for the whole research area. The winter wheat planting-area precision is obtained by comparing the estimated extraction area of the research area with the ground truth; combined with the official agricultural statistics yearbook, the accuracy of the extraction area is evaluated with the following formula:

P = (1 − |S − S'| / S') × 100%

where P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
A third aspect of the invention discloses an electronic device. The electronic equipment comprises a memory and a processor, the memory stores a computer program, and when the processor executes the computer program, the steps of the winter wheat remote sensing identification analysis method based on deep learning in any one of the first aspect of the disclosure are realized.
Fig. 4 is a block diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 4, the electronic device includes a processor, a memory, a communication interface, a display screen, and an input device, which are connected by a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the electronic device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, Near Field Communication (NFC) or other technologies. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the electronic equipment, an external keyboard, a touch pad or a mouse and the like.
It will be understood by those skilled in the art that the structure shown in fig. 4 is only a partial block diagram related to the technical solution of the present disclosure and does not limit the electronic device to which the solution of the present application is applied; a specific electronic device may include more or fewer components than those shown in the drawings, combine some components, or have a different arrangement of components.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer readable storage medium stores a computer program, and when the computer program is executed by a processor, the computer program realizes the steps in the method for remote sensing, identifying and analyzing winter wheat based on deep learning in any one of the first aspect of the disclosure.
It should be noted that the technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features are described; however, as long as there is no contradiction among them, such combinations should be considered within the scope of the present description. The above examples express only several embodiments of the present application; their description is relatively specific and detailed but is not to be construed as limiting the scope of the invention. A person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A winter wheat remote sensing identification analysis method based on deep learning is characterized by comprising the following steps:
step S1, training, verification and test data set creation: selecting a required polygonal area from part of a research area, creating a label vector file of the polygonal area, converting the label vector file into a raster file and reclassifying it to generate square vector data, batch-cutting the Sentinel-2 median composite images of five growth periods of the polygonal area and the raster file using the square vector data, and adjusting the size of the cut Sentinel-2 median composite images and raster files to obtain a training data set, a verification data set and a test data set; the obtained Sentinel-2 median composite images of the five growth periods share one label;
step S2, cutting and processing the Sentinel-2 median composite images of five growth periods of the whole research area to obtain a spatial distribution data set;
s3, building a U-Net semantic segmentation model and setting parameters;
step S4, training a U-Net semantic segmentation model: training the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input to obtain the U-Net semantic segmentation model under the optimal weight of each growth period;
step S5, calling a U-Net semantic segmentation model under the optimal weight of each growth period, and classifying the test set of each growth period to obtain a classified result image;
s6, calling a U-Net semantic segmentation model under the optimal weight of each growth period, classifying the spatial distribution data sets of each growth period, and splicing classification results to generate a winter wheat spatial distribution map of each growth period;
s7, evaluating the classification precision of winter wheat by a U-Net semantic segmentation model;
step S8, spatial mapping and area extraction of winter wheat: selecting the winter wheat spatial distribution map of the growth period with the highest classification precision, counting the winter wheat area to obtain an extraction area, and applying the extraction area and the ground truth to evaluate the accuracy of the extracted area.
2. The remote sensing identification analysis method for winter wheat based on deep learning of claim 1, wherein in the step S1, the specific method for creating the label vector file of the polygonal area comprises:
cutting a composite image of the whole winter wheat growth cycle using the polygonal surface elements of the polygonal area, and creating the label vector file of the polygonal area by referring to the composite images of five growth periods and field-measured winter wheat ground points.
3. The remote sensing identification analysis method for winter wheat based on deep learning of claim 1, wherein in step S1, the specific method for converting the label vector file into a raster file, and reclassifying to generate square vector data includes:
converting the label vector file into a raster file, reclassifying the raster file into classes 0 and 1, randomly creating point elements inside the polygonal area, then building a graphic buffer around the point elements to generate square vector data of size 1280 m × 1280 m, and ensuring that the boundaries of the square vector data lie entirely within the raster.
4. The winter wheat remote sensing identification and analysis method based on deep learning of claim 1, wherein in step S1, the Sentinel-2 median composite image comprises:
four bands: red, green, blue and near infrared;
and the sizes of the cut Sentinel-2 median composite image and the raster file are adjusted to 128 pixels × 128 pixels.
5. The winter wheat remote sensing identification analysis method based on deep learning of claim 1, wherein in step S2, the specific method of cutting and processing the Sentinel-2 median composite images of five growth periods of the whole research area to obtain the spatial distribution data set comprises:
cutting the Sentinel-2 median composite images of five growth periods of the whole research area into image blocks of 512 pixels × 512 pixels and removing the image blocks whose values are all background, to obtain the spatial distribution data set.
6. The remote sensing recognition analysis method for winter wheat based on deep learning of claim 1, wherein in the step S7, the specific method for assessing the classification accuracy of winter wheat by the U-Net semantic segmentation model comprises:
comparing the classified result images with the self-made labels, and quantitatively evaluating the winter wheat semantic segmentation accuracy of the test-set images of the five growth periods using precision, recall, F1-score, intersection over union (IoU) and accuracy.
7. The winter wheat remote sensing identification analysis method based on deep learning of claim 1, wherein in step S8, the extraction area and the ground truth are applied to evaluate the accuracy of the extracted area using the formula:
P = (1 − |S − S'| / S') × 100%, where P represents the area extraction precision, S represents the winter wheat planting area extracted by the deep learning method, and S' represents the real winter wheat planting area on the ground.
8. A winter wheat remote sensing identification analysis system based on deep learning, characterized in that the system comprises:
the first processing module is configured to select a required polygonal area from part of the study area, create a label vector file of the polygonal area, convert the label vector file into a raster file, reclassify the raster file, generate square vector data, batch-cut the Sentinel-2 median composite images of five growth periods of the polygonal area and the raster file using the square vector data, and adjust the size of the cut Sentinel-2 median composite images and raster files to obtain a training data set, a verification data set and a test data set; the obtained Sentinel-2 median composite images of the five growth periods share one label;
the second processing module is configured to cut and process the sentinel-2 median synthetic images of five growth periods of the whole region of the research area to obtain a spatial distribution data set;
the third processing module is configured to build a U-Net semantic segmentation model and set parameters;
the fourth processing module is configured to train the U-Net semantic segmentation model by taking the training data set and the verification data set of each growth period as input to obtain the U-Net semantic segmentation model under the optimal weight of each growth period;
the fifth processing module is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the test set of each growth period and obtain a classified result image;
the sixth processing module is configured to call a U-Net semantic segmentation model under the optimal weight of each growth period, classify the spatial distribution data sets of each growth period, and then splice classification results to generate a winter wheat spatial distribution map of each growth period;
the seventh processing module is configured to evaluate the classification precision of winter wheat of the U-Net semantic segmentation model;
and the eighth processing module is configured to select the winter wheat spatial distribution map of the growth period with the highest classification precision, count the winter wheat area to obtain an extraction area, and evaluate the accuracy of the extracted area using the extraction area and the ground truth.
9. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the remote sensing identification analysis method for winter wheat based on deep learning according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of the method for remote recognition and analysis of winter wheat based on deep learning according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210117044.7A CN114463637B (en) | 2022-02-07 | 2022-02-07 | Winter wheat remote sensing identification analysis method and system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114463637A true CN114463637A (en) | 2022-05-10 |
CN114463637B CN114463637B (en) | 2023-04-07 |
Family
ID=81411499
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210117044.7A Active CN114463637B (en) | 2022-02-07 | 2022-02-07 | Winter wheat remote sensing identification analysis method and system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114463637B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111460936A (en) * | 2020-03-18 | 2020-07-28 | 中国地质大学(武汉) | Remote sensing image building extraction method, system and electronic equipment based on U-Net network |
CN112183428A (en) * | 2020-10-09 | 2021-01-05 | 浙江大学中原研究院 | Wheat planting area segmentation and yield prediction method |
US20220215662A1 (en) * | 2021-01-06 | 2022-07-07 | Dalian University Of Technology | Video semantic segmentation method based on active learning |
CN113487638A (en) * | 2021-07-06 | 2021-10-08 | 南通创越时空数据科技有限公司 | Ground feature edge detection method of high-precision semantic segmentation algorithm U2-net |
Non-Patent Citations (1)
Title |
---|
Liu Tongxing et al., "Deep learning classification of winter wheat based on UAV imagery", Chinese Journal of Agricultural Resources and Regional Planning * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115082808A (en) * | 2022-06-17 | 2022-09-20 | 安徽大学 | Soybean planting area extraction method based on Gaofen-1 (GF-1) data and U-Net model |
CN115082808B (en) * | 2022-06-17 | 2023-05-09 | 安徽大学 | Soybean planting area extraction method based on Gaofen-1 (GF-1) data and U-Net model |
CN115578637A (en) * | 2022-10-17 | 2023-01-06 | 中国科学院空天信息创新研究院 | Winter wheat yield estimation analysis method and system based on long short-term memory (LSTM) network |
CN115690585A (en) * | 2022-11-11 | 2023-02-03 | 中国科学院空天信息创新研究院 | Method and system for extracting tillering number of wheat based on digital photo |
CN116052141A (en) * | 2023-03-30 | 2023-05-02 | 北京市农林科学院智能装备技术研究中心 | Crop growth period identification method, device, equipment and medium |
CN116052141B (en) * | 2023-03-30 | 2023-06-27 | 北京市农林科学院智能装备技术研究中心 | Crop growth period identification method, device, equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN114463637B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114463637B (en) | Winter wheat remote sensing identification analysis method and system based on deep learning | |
CN106778682B (en) | Training method and device for a convolutional neural network model | |
CN111126258A (en) | Image recognition method and related device | |
CN114092833B (en) | Remote sensing image classification method and device, computer equipment and storage medium | |
CN111160114B (en) | Gesture recognition method, gesture recognition device, gesture recognition equipment and computer-readable storage medium | |
CN111695463A (en) | Training method of face impurity detection model and face impurity detection method | |
CN113901900A (en) | Unsupervised change detection method and system for homologous or heterologous remote sensing image | |
CN110570440A (en) | Image automatic segmentation method and device based on deep learning edge detection | |
CN112949738B (en) | Multi-class unbalanced hyperspectral image classification method based on EECNN algorithm | |
CN114241326B (en) | Progressive intelligent production method and system for ground feature elements of remote sensing images | |
CN111222545B (en) | Image classification method based on linear programming incremental learning | |
CN111079807B (en) | Ground object classification method and device | |
CN113435254A (en) | Farmland deep learning extraction method based on Sentinel-2 imagery | |
CN115953612A (en) | ConvNeXt-based remote sensing image vegetation classification method and device | |
CN113096080B (en) | Image analysis method and system | |
CN116863345A (en) | High-resolution image farmland recognition method based on dual attention and scale fusion | |
CN117197462A (en) | Lightweight foundation cloud segmentation method and system based on multi-scale feature fusion and alignment | |
CN117237599A (en) | Image target detection method and device | |
CN116664954A (en) | Hyperspectral ground object classification method based on graph convolution and convolution fusion | |
CN115908363A (en) | Tumor cell counting method, device, equipment and storage medium | |
CN113096079B (en) | Image analysis system and construction method thereof | |
CN111931721B (en) | Method and device for detecting color and number of annual inspection label and electronic equipment | |
CN116342628B (en) | Pathological image segmentation method, pathological image segmentation device and computer equipment | |
CN109190451B (en) | Remote sensing image vehicle detection method based on LFP characteristics | |
CN117197479A (en) | Image analysis method and device for the outer surface of corn ears, computer equipment, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||