CN109993220A - Multi-source Remote Sensing Images Classification method based on two-way attention fused neural network - Google Patents
- Publication number
- CN109993220A (application CN201910224541.5A)
- Authority
- CN
- China
- Prior art keywords
- sample
- lidar
- feature
- layer
- hyperspectral
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a multi-source remote sensing image classification method based on a dual-path attention fusion neural network, which mainly addresses the low classification accuracy of the prior art on multi-source remote sensing images. The implementation is as follows: 1) preprocess and partition the hyperspectral data and the LiDAR data to obtain training samples and test samples; 2) design an attention fusion layer, based on the attention mechanism, that performs weighted screening and fusion of the spectral data and the LiDAR data, and construct a dual-path interconnected convolutional neural network; 3) train the interconnected convolutional neural network with the multi-class cross-entropy loss function to obtain a trained network model; 4) predict the test samples with the trained model to obtain the final classification result. The invention can extract features from multi-source remote sensing data and fuse and classify them effectively, alleviates the problem of excessive feature dimensionality in fusion, and improves the average classification accuracy. It can be used to fuse remote sensing images acquired by two different sensors.
Description
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a remote sensing image classification method that can be used to fuse and classify remote sensing images acquired by two different sensors.
Background technique
In recent years, the number of remote sensors has grown very rapidly, and people can simultaneously acquire multi-source datasets of the same scene, which makes it possible to integrate the different information captured by different sensors. For example, a multispectral image (MSI) or a hyperspectral image (HSI) is usually composed of multiple spectral channels of the same scene; it contains detailed spectral and spatial information and provides the ability to accurately distinguish materials of interest. On the other hand, LiDAR data represent the elevation of the scene and the height of objects, helping to distinguish objects that are made of similar materials but differ in height. Therefore, integrating the two kinds of sensor information can provide detailed spectral-spatial-elevation information and further improve classification performance.
The classification of remote sensing images is usually complex and challenging. To make full use of hyperspectral images and LiDAR data, current fusion classification methods fall roughly into the following categories. The first stacks the LiDAR data as an additional channel on the hyperspectral image and classifies the stacked image; owing to the discriminative features of LiDAR, this can improve classification performance, yet such a simple concatenation of raw features may not suffice to separate the classes of interest. The second first extracts traditional features, such as morphological features, wavelet features and texture features, from the hyperspectral image and the LiDAR image respectively, and then classifies the extracted features of the two images with traditional classifiers such as support vector machines (SVM) and random forests (RF).
Both traditional classification methods above have limited capability in feature learning and classification. First, hand-crafted features may underutilize the spatial information. In addition, the number of features extracted from the different sensors is large; although they can characterize the information of the two images in more detail, they also cause a serious curse of dimensionality. In recent years, deep learning methods have been proposed to extract features in a hierarchical fashion, providing a direction for multi-source remote sensing image fusion based on deep features. The convolutional neural network (CNN) simulates the "local receptive field" of the human visual system, converts full connections into local connections, and exploits local connections to handle spatial dependence, which greatly reduces the number of parameters to be trained and the computational cost. Moreover, CNNs can learn rich hierarchical representations autonomously and can adaptively extract suitable features from different data sources, making them well suited to the fusion classification of hyperspectral images and LiDAR images.
Current CNN-based fusion classification methods usually proceed as follows: first, two CNNs of similar structure are built for the hyperspectral data and the LiDAR data respectively to extract deep features; the features of the two images extracted by the two networks are then reshaped and concatenated into one feature vector; finally, fully connected layers classify the image. Such methods exploit the advantages of CNNs to effectively extract features from the hyperspectral image and the LiDAR image and apply these features to classification. However, this model still has shortcomings in fusion classification. First, the hyperspectral network and the LiDAR network are separated from each other, so feature information cannot circulate and fuse between them well. Second, the features of the two images extracted by the CNNs are only fed to the fully connected layers after a simple concatenation, without any screening or fusion of the features, which leads to excessive feature dimensionality and feature redundancy.
Summary of the invention
The purpose of the invention is, in view of the above shortcomings of the prior art, to propose a multi-source remote sensing image fusion classification method based on a dual-path attention fusion neural network, so as to reduce the redundancy of the feature information, strengthen the circulation between the data, and improve the classification accuracy of multi-source remote sensing images.
The technical idea of the invention is as follows: build a dual-path interactively connected convolutional neural network to extract features from the hyperspectral data and the LiDAR data respectively; design a fusion layer based on the attention mechanism to learn to screen and fuse the hyperspectral and LiDAR features produced by the convolutional networks; and connect the resulting fusion features to the fully connected layers for the classification output. The implementation includes the following:
(1) Input a registered hyperspectral image and a LiDAR image, and construct the training sample set and the test sample set:
(1a) Input the hyperspectral remote sensing image, which contains M labeled pixels and N unlabeled pixels. Each pixel of the image is a sample; the M labeled samples constitute the hyperspectral training sample set H, and the N unlabeled samples constitute the hyperspectral test sample set He;
(1b) Input the LiDAR image, in which the number of labeled pixels is M, with coordinates identical to those of the labeled pixels in the hyperspectral image, and the number of unlabeled pixels is N, with coordinates identical to those of the unlabeled pixels in the hyperspectral image. Each pixel of the image is a sample; the M labeled samples constitute the LiDAR training sample set R, and the N unlabeled samples constitute the LiDAR test sample set Re.
(2) Based on the attention mechanism, design an attention feature fusion layer that performs selective weighted screening and fusion of the hyperspectral sample features and the LiDAR sample features;
(3) Construct the dual-path interconnected convolutional neural network:
(3a) Using a dual-input design, the two inputs of the dual-path interconnected convolutional neural network are a hyperspectral sample of size W × W × D and a LiDAR sample of size W × W respectively;
(3b) Design the two branch convolutional neural networks for the hyperspectral sample set and the LiDAR sample set respectively, and circulate and fuse features through the interconnection layers of the two branches, where the hyperspectral branch consists of three convolutional layers and three pooling layers, and the LiDAR branch consists of three convolutional layers, three pooling layers and two interconnection layers;
(3c) After the two branch networks, connect in sequence an attention fusion layer, two fully connected layers and an output layer, completing the construction of the dual-path interconnected convolutional neural network, whose output is the class of each sample;
(4) Using the adaptive moment estimation (Adam) optimization algorithm, train the dual-path interconnected convolutional neural network designed in step (3) with the multi-class cross-entropy loss function and the true classes p(H, R) of the training samples:
(4a) Input the hyperspectral training sample set H and the LiDAR training sample set R into the dual-path interconnected convolutional neural network; after the two branch networks, obtain the hyperspectral sample features X1 and the LiDAR sample features X2, reshape the two kinds of features and feed them into the attention fusion layer for screening and fusion, obtaining the fusion features X;
(4b) Feed the fusion features X in sequence into the fully connected layers and the output layer, obtaining the predicted class of each training sample;
(4c) Repeat steps (4a)-(4b) until the dual-path interconnected network converges, obtaining a trained dual-path interconnected convolutional neural network model;
(5) Perform class prediction on the test samples:
Input the hyperspectral test sample set He and the LiDAR test sample set Re into the trained dual-path convolutional neural network model to obtain the class of each test sample.
Compared with the prior art, the invention has the following advantages:
First, when extracting features from the hyperspectral data and the LiDAR data, the invention designs a dual-branch convolutional neural network containing interactive connections, which circulates the hyperspectral features into the LiDAR features. The LiDAR features therefore contain not merely the elevation information of the scene but richer information, which remedies the mutual separation of the two kinds of remote sensing data in existing methods and makes the fusion classification of multi-source remote sensing images more accurate.
Second, when fusing the features extracted by the hyperspectral branch network and the LiDAR branch network, the invention designs a feature fusion layer based on the attention mechanism, which performs weighted screening and fusion of the features of the two images. The weights of the fusion layer are obtained by training the interactively connected convolutional network; they effectively screen the features of the two images, filter out unimportant features and assign larger weights to important features, which reduces feature redundancy and dimensionality and improves the fusion classification accuracy of remote sensing images.
Brief description of the drawings
Fig. 1 is the overall flowchart of the implementation of the invention;
Fig. 2 shows the remote sensing scene image and the ground-truth labels used in the simulation of the invention;
Fig. 3 shows the simulation result of classifying the image in Fig. 2 with the invention, together with enlarged details;
Fig. 4 shows the simulation result of classifying the image in Fig. 2 with the prior art, together with enlarged details.
Detailed description of the embodiments
Specific embodiments and effects of the invention are described in further detail below with reference to the drawings.
Referring to Fig. 1, the implementation steps of this example are as follows:
Step 1. Input a registered hyperspectral image and a LiDAR image, and construct the training sample set and the test sample set.
1a) Input the hyperspectral remote sensing image, which contains M labeled pixels and N unlabeled pixels. Each pixel of the image is a sample; the M labeled samples constitute the hyperspectral training sample set H, and the N unlabeled samples constitute the hyperspectral test sample set He. In this example, the number of labeled samples M is 66485 and the number of unlabeled samples N is 598360;
1b) Input the LiDAR image, in which the number of labeled pixels is M, with coordinates identical to those of the labeled pixels in the hyperspectral image, and the number of unlabeled pixels is N, with coordinates identical to those of the unlabeled pixels in the hyperspectral image. Each pixel of the image is a sample; the M labeled samples constitute the LiDAR training sample set R, and the N unlabeled samples constitute the LiDAR test sample set Re. In this example, the number of labeled samples M is 66485 and the number of unlabeled samples N is 598360.
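As an illustration only (the patent gives no code), splitting the co-registered pixels of Step 1 into labeled training sets and unlabeled test sets by a ground-truth mask might look like the following NumPy sketch; the function and array names are hypothetical, and small toy sizes stand in for the real 349 × 1905 scene.

```python
import numpy as np

def split_samples(hsi, lidar, labels):
    """Split co-registered pixels into labeled (training) and unlabeled (test) sets.

    hsi    : (rows, cols, bands) hyperspectral cube
    lidar  : (rows, cols) LiDAR elevation image
    labels : (rows, cols) ground-truth map, 0 = unlabeled pixel
    """
    mask = labels > 0                    # labeled pixels share coordinates in both images
    H, He = hsi[mask], hsi[~mask]        # hyperspectral training / test samples
    R, Re = lidar[mask], lidar[~mask]    # LiDAR training / test samples
    return H, He, R, Re

# Toy data (the patent's scene is 349 x 1905 with 144 bands; small sizes used here).
rng = np.random.default_rng(0)
rows, cols, bands = 60, 80, 144
labels = (rng.random((rows, cols)) < 0.1).astype(int)
H, He, R, Re = split_samples(rng.normal(size=(rows, cols, bands)),
                             rng.normal(size=(rows, cols)), labels)
assert len(H) + len(He) == rows * cols and len(H) == len(R)
```

In practice each pixel would also carry a W × W spatial neighborhood patch, as described in Step 2 below, rather than a single spectrum.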
Step 2. Construct the dual-path interconnected convolutional neural network.
2a) Using a dual-input design, the two inputs of the dual-path interconnected convolutional neural network are a hyperspectral sample of size W × W × D and a LiDAR sample of size W × W respectively. In this example, the two input sample sizes are 41 × 41 × 3 and 41 × 41;
2b) Design the two branch convolutional neural networks for the hyperspectral sample set and the LiDAR sample set respectively:
2b1) Design the branch convolutional neural network for the hyperspectral sample set. The network has six layers in total, in the order first convolutional layer - first pooling layer - second convolutional layer - second pooling layer - third convolutional layer - third pooling layer, where each convolutional layer extracts features from the hyperspectral sample and each pooling layer transforms the dimensions of the features extracted by the preceding convolutional layer, yielding the hyperspectral features S;
2b2) Design the branch convolutional neural network for the LiDAR sample set. The network has eight layers in total, in the order 1st convolutional layer - 1st pooling layer - 1st interconnection layer - 2nd convolutional layer - 2nd pooling layer - 2nd interconnection layer - 3rd convolutional layer - 3rd pooling layer, where each convolutional layer extracts features from the LiDAR sample, each pooling layer transforms the dimensions of the LiDAR features extracted by the preceding convolutional layer, yielding the LiDAR features L, and each interconnection layer connects the hyperspectral features S with the LiDAR features L, yielding the LiDAR connection features F.
The circulation and fusion of features is carried out through the interconnection layers of the above two branch networks.
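The patent does not specify kernel sizes or strides for the convolutional and pooling layers. As an illustration only, assuming 'same'-padded convolutions (which preserve spatial size) and 2 × 2 pooling with stride 2 and ceiling rounding, the spatial size of the 41 × 41 input patches through the three conv-pool stages of either branch can be traced as follows; these assumptions are not taken from the patent.

```python
import math

def branch_shapes(size, stages=3):
    """Trace the spatial size through conv('same') + 2x2/stride-2 pooling stages."""
    shapes = [size]
    for _ in range(stages):
        size = math.ceil(size / 2)  # 'same' conv keeps the size; pooling halves it
        shapes.append(size)
    return shapes

# Both branches start from 41x41 patches (hyperspectral: 41x41xD, LiDAR: 41x41).
print(branch_shapes(41))  # [41, 21, 11, 6]
```

Because the two branches share this shape schedule, the hyperspectral features S and the LiDAR features L have matching spatial sizes at each interconnection layer and can be concatenated along the channel axis.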
2c) Based on the attention mechanism, design the attention feature fusion layer:
2c1) Connect the feature obtained by the branch network for a hyperspectral training sample with the feature obtained by the branch network for the corresponding LiDAR training sample, with the following expression:
x_i = concat(x_1i, x_2i)
where i = 1, 2, ..., M, M is the total number of training samples, x_1i denotes the feature of the i-th hyperspectral training sample after the branch convolutional network, x_2i denotes the feature of the i-th LiDAR training sample after the branch convolutional network, and x_i denotes the feature vector obtained by connecting the i-th hyperspectral training sample feature with the i-th LiDAR training sample feature;
2c2) Initialize a weight matrix and a bias vector, multiply the connected feature vector x_i by the weight matrix, add the bias vector, and apply the tangent function, obtaining the tangent vector e_i:
e_i = tanh(x_i W + b),
where W and b denote the weight matrix and the bias vector of the attention fusion layer respectively;
2c3) Apply the normalized exponential function to the tangent vector e_i, obtaining the normalized vector a_i:
a_i = softmax(e_i)
where softmax denotes the normalized exponential function;
2c4) Multiply the transpose of the normalized vector a_i by the connected feature vector x_i, obtaining the fusion feature of the i-th sample f_i = a_i^T x_i, where T denotes transposition;
2c5) Repeat steps 2c1-2c4 for each of the M training samples to obtain the fusion features of all samples, which constitutes the complete attention fusion layer;
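The computation of steps 2c1-2c4 can be sketched in NumPy. The tensor shapes are an assumption (the patent leaves them open): here the two branch features are stacked as the rows of x_i, W maps each row to a scalar score, and f_i is the attention-weighted sum of the rows; in the real network W and b would be learned by backpropagation rather than fixed.

```python
import numpy as np

def attention_fuse(x1, x2, W, b):
    """Attention fusion of one sample's branch features (steps 2c1-2c4).

    x1, x2 : (d,) hyperspectral / LiDAR branch feature vectors
    W      : (d, 1) weight matrix, b : (1,) bias (learned in practice)
    """
    x = np.stack([x1, x2])            # 2c1: x_i = concat(x_1i, x_2i), shape (2, d)
    e = np.tanh(x @ W + b).ravel()    # 2c2: e_i = tanh(x_i W + b), shape (2,)
    a = np.exp(e) / np.exp(e).sum()   # 2c3: a_i = softmax(e_i), weights sum to 1
    return a @ x                      # 2c4: f_i = a_i^T x_i, shape (d,)

rng = np.random.default_rng(0)
d = 8
f = attention_fuse(rng.normal(size=d), rng.normal(size=d),
                   rng.normal(size=(d, 1)), np.zeros(1))
print(f.shape)  # (8,)
```

Note that the fused feature keeps the dimensionality d of a single branch feature, which is how the layer reduces the dimensionality of the simple concatenation used in prior methods.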
2d) After the attention fusion layer, connect in sequence two fully connected layers and an output layer, completing the construction of the dual-path interconnected convolutional neural network, whose output is the class of each sample. In this example, the number of classes is 15.
Step 3. Train the dual-path interconnected convolutional neural network using the adaptive moment estimation (Adam) optimization algorithm, the multi-class cross-entropy loss function and the true classes p(H, R) of the training samples.
3a) Input the hyperspectral training sample set H and the LiDAR training sample set R into the dual-path interconnected convolutional neural network; after the two branch networks, obtain the hyperspectral sample features X1 and the LiDAR sample features X2, reshape the two kinds of features and feed them into the attention fusion layer for screening and fusion, obtaining the fusion features X;
3b) Feed the fusion features X in sequence into the fully connected layers and the output layer, obtaining the predicted class q(H, R) of each training sample; evaluate the true class p(H, R) and the predicted class q(H, R) with the multi-class cross-entropy loss function Y(p, q) = -Σ p(H, R) log(q(H, R)), obtaining the distance Y(p, q) between the predicted class and the true class, and backpropagate this distance through the interconnected convolutional network;
3c) Repeat steps 3a)-3b) until the dual-path interconnected network converges, obtaining the trained dual-path interconnected convolutional neural network model. In this example, steps 3a)-3b) are repeated 200 times.
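A minimal sketch of the loss used in 3b): with one-hot true classes p and softmax outputs q, the multi-class cross-entropy Y(p, q) = -Σ p log(q) reduces to the negative log-probability the network assigns to the true class. The three-class example below is illustrative only (the patent's network has 15 classes).

```python
import numpy as np

def cross_entropy(p, q):
    """Y(p, q) = -sum(p * log(q)), averaged over the batch of samples."""
    return float(-(p * np.log(q)).sum(axis=1).mean())

p = np.array([[0.0, 1.0, 0.0]])        # one-hot true class (class 1 of 3)
q = np.array([[0.1, 0.8, 0.1]])        # predicted class probabilities
print(round(cross_entropy(p, q), 4))   # -log(0.8) = 0.2231
```

Adam then updates the weights of both branches, the interconnection layers and the attention fusion layer to reduce this distance.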
Step 4. Perform class prediction on the test samples.
Input the hyperspectral test sample set and the LiDAR test sample set into the trained dual-path convolutional neural network model to obtain the class of each test sample.
The technical effects of the invention are further described below with reference to the simulation experiments.
1. Simulation conditions
The data used in this simulation experiment are the hyperspectral remote sensing data and the LiDAR data of the University of Houston campus and its surrounding urban area, shown in Fig. 2, where Fig. 2(a) is the remote sensing image used in the simulation and Fig. 2(b) is its true classification. The data size is 349 × 1905, the hyperspectral data contain 144 spectral bands, and there are 15 different ground-object classes in total.
Simulation platform: Intel Core i5-6500 CPU at 3.20 GHz, 8 GB of memory, Windows 10 operating system, Keras and TensorFlow deep learning platform, Python.
2. Simulation content and result analysis
Simulation 1: the remote sensing image is classified with the invention; the result is shown in Fig. 3, where Fig. 3(a) is the simulation result of the invention on the remote sensing image and Fig. 3(b)-Fig. 3(d) are enlarged details of Fig. 3(a).
Simulation 2: the remote sensing image is classified with the prior art; the result is shown in Fig. 4, where Fig. 4(a) is the simulation result of the prior art on the remote sensing image and Fig. 4(b)-Fig. 4(d) are enlarged details of Fig. 4(a).
Comparing Fig. 2(b) with Fig. 3(a), it can be seen that the simulation result of the invention is basically consistent with the true labels.
Comparing Fig. 3 with Fig. 4, it can be seen that the invention is more accurate in the classification details, whereas the prior art misclassifies some samples; compared with the existing method, the invention raises the average classification accuracy of the simulation result from 92% to 98.5%.
The above simulation results show that, by using the interactively connected dual-path convolutional neural network and the attention-based feature fusion strategy, the invention circulates the information of the hyperspectral image and the LiDAR image, learns, screens and fuses their features, and finally applies them to remote sensing ground-object classification; the fusion classification result is more accurate and complete, and the average classification accuracy is higher.
Claims (4)
1. A multi-source remote sensing image classification method based on a dual-path attention fusion neural network, comprising the following:
(1) Input a registered hyperspectral image and a LiDAR image, and construct the training sample set and the test sample set:
(1a) Input the hyperspectral remote sensing image, which contains M labeled pixels and N unlabeled pixels; each pixel of the image is a sample, the M labeled samples constitute the hyperspectral training sample set H, and the N unlabeled samples constitute the hyperspectral test sample set He;
(1b) Input the LiDAR image, in which the number of labeled pixels is M, with coordinates identical to those of the labeled pixels in the hyperspectral image, and the number of unlabeled pixels is N, with coordinates identical to those of the unlabeled pixels in the hyperspectral image; each pixel of the image is a sample, the M labeled samples constitute the LiDAR training sample set R, and the N unlabeled samples constitute the LiDAR test sample set Re.
(2) Based on the attention mechanism, design an attention feature fusion layer that performs selective weighted screening and fusion of the hyperspectral sample features and the LiDAR sample features;
(3) Construct the dual-path interconnected convolutional neural network:
(3a) Using a dual-input design, the two inputs of the dual-path interconnected convolutional neural network are a hyperspectral sample and a LiDAR sample respectively;
(3b) Design the two branch convolutional neural networks for the hyperspectral sample set and the LiDAR sample set respectively, and circulate and fuse features through the interconnection layers of the two branches, where the hyperspectral branch consists of three convolutional layers and three pooling layers, and the LiDAR branch consists of three convolutional layers, three pooling layers and two interconnection layers;
(3c) After the two branch networks, connect in sequence an attention fusion layer, two fully connected layers and an output layer, completing the construction of the dual-path interconnected convolutional neural network, whose output is the class of each sample;
(4) Using the adaptive moment estimation (Adam) optimization algorithm, train the dual-path interconnected convolutional neural network designed in step (3) with the multi-class cross-entropy loss function and the true classes p(H, R) of the training samples:
(4a) Input the hyperspectral training sample set H and the LiDAR training sample set R into the dual-path interconnected convolutional neural network; after the two branch networks, obtain the hyperspectral sample features X1 and the LiDAR sample features X2, reshape the two kinds of features and feed them into the attention fusion layer for screening and fusion, obtaining the fusion features X;
(4b) Feed the fusion features X in sequence into the fully connected layers and the output layer, obtaining the predicted class of each training sample;
(4c) Repeat steps (4a)-(4b) until the dual-path interconnected network converges, obtaining a trained dual-path interconnected convolutional neural network model;
(5) Perform class prediction on the test samples:
Input the hyperspectral test sample set He and the LiDAR test sample set Re into the trained dual-path convolutional neural network model to obtain the class of each test sample.
2. The method according to claim 1, wherein the attention feature fusion layer in (2) is designed, based on the attention mechanism, as follows:
(2a) Connect the feature obtained by the branch network for a hyperspectral training sample with the feature obtained by the branch network for the corresponding LiDAR training sample, with the following expression:
x_i = concat(x_1i, x_2i)
where x_1i denotes the feature of the i-th hyperspectral training sample after the branch convolutional network, x_2i denotes the feature of the i-th LiDAR training sample after the branch convolutional network, x_i denotes the feature vector obtained by connecting the i-th hyperspectral training sample feature with the i-th LiDAR training sample feature, and i = 1, 2, ..., M, where M is the total number of training samples;
(2b) Initialize a weight matrix and a bias vector, multiply the connected feature vector by the weight matrix, add the bias vector, and apply the tangent function, obtaining the tangent vector e_i:
e_i = tanh(x_i W + b),
where W and b denote the weight matrix and the bias vector of the attention fusion layer respectively;
(2c) Apply the normalized exponential function to the tangent vector e_i obtained in (2b), obtaining the normalized vector a_i:
a_i = softmax(e_i)
where softmax denotes the normalized exponential function;
(2d) Multiply the transpose of the normalized vector a_i obtained in (2c) by the connected feature vector x_i from (2a), obtaining the fusion feature of the i-th sample f_i = a_i^T x_i, where T denotes transposition;
(2e) Repeat steps (2a)-(2d) for each of the M training samples to obtain the fusion features of all samples, which constitutes the complete attention fusion layer.
3. The method according to claim 1, wherein the two branch convolutional neural networks for the hyperspectral samples and the LiDAR samples in step (3b) are designed as follows:
(3b1) Design the branch convolutional neural network for the hyperspectral sample set; the network has six layers in total, in the order first convolutional layer - first pooling layer - second convolutional layer - second pooling layer - third convolutional layer - third pooling layer, where each convolutional layer extracts features from the hyperspectral sample and each pooling layer transforms the dimensions of the features extracted by its preceding convolutional layer, yielding the hyperspectral features S;
(3b2) Design the branch convolutional neural network for the LiDAR sample set; the network has eight layers in total, in the order 1st convolutional layer - 1st pooling layer - 1st interconnection layer - 2nd convolutional layer - 2nd pooling layer - 2nd interconnection layer - 3rd convolutional layer - 3rd pooling layer, where each convolutional layer extracts features from the LiDAR sample, each pooling layer transforms the dimensions of the LiDAR features extracted by its preceding convolutional layer, yielding the LiDAR features L, and each interconnection layer connects the hyperspectral features S with the LiDAR features L, yielding the LiDAR connection features F.
4. The method according to claim 1, wherein the multi-class cross-entropy loss function in step (4) has the following expression:
Y(p, q) = -Σ p(H, R) log(q(H, R)),
where H and R denote the hyperspectral training samples and the LiDAR training samples respectively, p(H, R) denotes the true classes of the training samples, q(H, R) denotes the predicted classes of the training samples, and Y(p, q) denotes the distance between the true classes and the predicted classes of the training samples.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910224541.5A CN109993220B (en) | 2019-03-23 | 2019-03-23 | Multi-source remote sensing image classification method based on double-path attention fusion neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109993220A true CN109993220A (en) | 2019-07-09 |
CN109993220B CN109993220B (en) | 2022-12-06 |
Family
ID=67130950
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910224541.5A Active CN109993220B (en) | 2019-03-23 | 2019-03-23 | Multi-source remote sensing image classification method based on double-path attention fusion neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109993220B (en) |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110503191A (en) * | 2019-08-27 | 2019-11-26 | 四川博文讯通科技有限公司 | A multilayer neural network model for video analysis |
CN110516596A (en) * | 2019-08-27 | 2019-11-29 | 西安电子科技大学 | Spatial-spectral attention hyperspectral image classification method based on Octave convolution |
CN110728224A (en) * | 2019-10-08 | 2020-01-24 | 西安电子科技大学 | Remote sensing image classification method based on attention mechanism depth Contourlet network |
CN110866494A (en) * | 2019-11-14 | 2020-03-06 | 三亚中科遥感研究所 | Optical remote sensing image-based town group extraction method and system |
CN111291622A (en) * | 2020-01-16 | 2020-06-16 | 武汉汉达瑞科技有限公司 | Method and device for detecting building change in remote sensing image |
CN111563520A (en) * | 2020-01-16 | 2020-08-21 | 北京航空航天大学 | Hyperspectral image classification method based on space-spectrum combined attention mechanism |
CN111666313A (en) * | 2020-05-25 | 2020-09-15 | 中科星图股份有限公司 | Correlation construction and multi-user data matching method based on multi-source heterogeneous remote sensing data |
CN111667489A (en) * | 2020-04-30 | 2020-09-15 | 华东师范大学 | Cancer hyperspectral image segmentation method and system based on double-branch attention deep learning |
CN111738052A (en) * | 2020-04-13 | 2020-10-02 | 上海海洋大学 | Multi-feature fusion hyperspectral remote sensing ground object classification method based on deep learning |
CN111898662A (en) * | 2020-07-20 | 2020-11-06 | 北京理工大学 | Coastal wetland deep learning classification method, device, equipment and storage medium |
CN112052758A (en) * | 2020-08-25 | 2020-12-08 | 西安电子科技大学 | Hyperspectral image classification method based on attention mechanism and recurrent neural network |
CN112116563A (en) * | 2020-08-28 | 2020-12-22 | 南京理工大学 | Hyperspectral image target detection method and system based on spectral dimension and space cooperation neighborhood attention |
CN112232343A (en) * | 2020-09-03 | 2021-01-15 | 国家粮食和物资储备局科学研究院 | Neural network and method for recognizing grain mildewed grains |
CN112560733A (en) * | 2020-12-23 | 2021-03-26 | 上海交通大学 | Multitasking system and method for two-stage remote sensing image |
CN112560967A (en) * | 2020-12-18 | 2021-03-26 | 西安电子科技大学 | Multi-source remote sensing image classification method, storage medium and computing device |
CN112861978A (en) * | 2021-02-20 | 2021-05-28 | 齐齐哈尔大学 | Multi-branch feature fusion remote sensing scene image classification method based on attention mechanism |
CN112949699A (en) * | 2021-02-09 | 2021-06-11 | 国家卫星气象中心(国家空间天气监测预警中心) | Remote sensing image classification model establishing and verifying method, system and electronic equipment |
CN113077051A (en) * | 2021-04-14 | 2021-07-06 | 广东博智林机器人有限公司 | Network model training method and device, text classification model and network model |
CN113095409A (en) * | 2021-04-13 | 2021-07-09 | 西安电子科技大学 | Hyperspectral image classification method based on attention mechanism and weight sharing |
CN113222044A (en) * | 2021-05-25 | 2021-08-06 | 合肥工业大学 | Cervical fluid-based cell classification method based on ternary attention and scale correlation fusion |
CN113283529A (en) * | 2021-06-08 | 2021-08-20 | 南通大学 | Neural network construction method for multi-modal image visibility detection |
CN113435253A (en) * | 2021-05-31 | 2021-09-24 | 西安电子科技大学 | Multi-source image combined urban area ground surface coverage classification method |
CN113496148A (en) * | 2020-03-19 | 2021-10-12 | 中科星图股份有限公司 | Multi-source data fusion method and system |
CN113627590A (en) * | 2021-07-29 | 2021-11-09 | 中汽创智科技有限公司 | Attention module and attention mechanism of convolutional neural network and convolutional neural network |
CN113705718A (en) * | 2021-09-06 | 2021-11-26 | 齐齐哈尔大学 | Remote sensing scene image classification method based on multi-level feature dense fusion |
CN113780147A (en) * | 2021-09-06 | 2021-12-10 | 西安电子科技大学 | Lightweight hyperspectral ground object classification method and system with dynamic fusion convolution network |
CN114119444A (en) * | 2021-11-29 | 2022-03-01 | 武汉大学 | Multi-source remote sensing image fusion method based on deep neural network |
CN114399684A (en) * | 2022-03-24 | 2022-04-26 | 中国科学院西安光学精密机械研究所 | Hyperspectral image openness classification method based on double loss functions |
CN114581773A (en) * | 2022-02-28 | 2022-06-03 | 西安电子科技大学 | Multi-mode remote sensing data classification method based on graph convolution network |
CN114972885A (en) * | 2022-06-17 | 2022-08-30 | 西安电子科技大学 | Multi-modal remote sensing image classification method based on model compression |
CN115331110A (en) * | 2022-08-26 | 2022-11-11 | 苏州大学 | Fusion classification method and device for remote sensing hyperspectral image and laser radar image |
CN115546569A (en) * | 2022-12-05 | 2022-12-30 | 鹏城实验室 | Attention mechanism-based data classification optimization method and related equipment |
CN116091875A (en) * | 2023-04-11 | 2023-05-09 | 合肥的卢深视科技有限公司 | Model training method, living body detection method, electronic device, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160307073A1 (en) * | 2015-04-20 | 2016-10-20 | Los Alamos National Security, Llc | Change detection and change monitoring of natural and man-made features in multispectral and hyperspectral satellite imagery |
US20170076438A1 (en) * | 2015-08-31 | 2017-03-16 | Cape Analytics, Inc. | Systems and methods for analyzing remote sensing imagery |
CN106845418A (en) * | 2017-01-24 | 2017-06-13 | 北京航空航天大学 | Hyperspectral image classification method based on deep learning |
CN107292343A (en) * | 2017-06-23 | 2017-10-24 | 中南大学 | Hyperspectral remote sensing image classification method based on a six-layer convolutional neural network and combined spectral-spatial information |
CN108460342A (en) * | 2018-02-05 | 2018-08-28 | 西安电子科技大学 | Hyperspectral image classification method based on convolutional and recurrent neural networks |
CN109376804A (en) * | 2018-12-19 | 2019-02-22 | 中国地质大学(武汉) | Hyperspectral remote sensing image classification method based on attention mechanism and convolutional neural network |
2019-03-23: Application CN201910224541.5A filed in China; granted as CN109993220B (status: Active)
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110516596B (en) * | 2019-08-27 | 2023-03-24 | 西安电子科技大学 | Octave convolution-based spatial spectrum attention hyperspectral image classification method |
CN110516596A (en) * | 2019-08-27 | 2019-11-29 | 西安电子科技大学 | Spatial-spectral attention hyperspectral image classification method based on Octave convolution |
CN110503191A (en) * | 2019-08-27 | 2019-11-26 | 四川博文讯通科技有限公司 | A multilayer neural network model for video analysis |
CN110728224A (en) * | 2019-10-08 | 2020-01-24 | 西安电子科技大学 | Remote sensing image classification method based on attention mechanism depth Contourlet network |
CN110728224B (en) * | 2019-10-08 | 2022-03-11 | 西安电子科技大学 | Remote sensing image classification method based on attention mechanism depth Contourlet network |
CN110866494A (en) * | 2019-11-14 | 2020-03-06 | 三亚中科遥感研究所 | Optical remote sensing image-based town group extraction method and system |
CN110866494B (en) * | 2019-11-14 | 2022-09-06 | 三亚中科遥感研究所 | Urban group extraction method and system based on optical remote sensing image |
CN111563520B (en) * | 2020-01-16 | 2023-01-13 | 北京航空航天大学 | Hyperspectral image classification method based on space-spectrum combined attention mechanism |
CN111291622B (en) * | 2020-01-16 | 2023-07-11 | 武汉汉达瑞科技有限公司 | Method and device for detecting building change in remote sensing image |
CN111563520A (en) * | 2020-01-16 | 2020-08-21 | 北京航空航天大学 | Hyperspectral image classification method based on space-spectrum combined attention mechanism |
CN111291622A (en) * | 2020-01-16 | 2020-06-16 | 武汉汉达瑞科技有限公司 | Method and device for detecting building change in remote sensing image |
CN113496148A (en) * | 2020-03-19 | 2021-10-12 | 中科星图股份有限公司 | Multi-source data fusion method and system |
CN111738052A (en) * | 2020-04-13 | 2020-10-02 | 上海海洋大学 | Multi-feature fusion hyperspectral remote sensing ground object classification method based on deep learning |
CN111738052B (en) * | 2020-04-13 | 2024-03-12 | 上海海洋大学 | Multi-feature fusion hyperspectral remote sensing ground object classification method based on deep learning |
CN111667489A (en) * | 2020-04-30 | 2020-09-15 | 华东师范大学 | Cancer hyperspectral image segmentation method and system based on double-branch attention deep learning |
CN111667489B (en) * | 2020-04-30 | 2022-04-05 | 华东师范大学 | Cancer hyperspectral image segmentation method and system based on double-branch attention deep learning |
CN111666313B (en) * | 2020-05-25 | 2023-02-07 | 中科星图股份有限公司 | Correlation construction and multi-user data matching method based on multi-source heterogeneous remote sensing data |
CN111666313A (en) * | 2020-05-25 | 2020-09-15 | 中科星图股份有限公司 | Correlation construction and multi-user data matching method based on multi-source heterogeneous remote sensing data |
CN111898662A (en) * | 2020-07-20 | 2020-11-06 | 北京理工大学 | Coastal wetland deep learning classification method, device, equipment and storage medium |
CN111898662B (en) * | 2020-07-20 | 2023-01-06 | 北京理工大学 | Coastal wetland deep learning classification method, device, equipment and storage medium |
CN112052758A (en) * | 2020-08-25 | 2020-12-08 | 西安电子科技大学 | Hyperspectral image classification method based on attention mechanism and recurrent neural network |
CN112052758B (en) * | 2020-08-25 | 2023-05-23 | 西安电子科技大学 | Hyperspectral image classification method based on attention mechanism and recurrent neural network |
CN112116563A (en) * | 2020-08-28 | 2020-12-22 | 南京理工大学 | Hyperspectral image target detection method and system based on spectral dimension and space cooperation neighborhood attention |
CN112232343A (en) * | 2020-09-03 | 2021-01-15 | 国家粮食和物资储备局科学研究院 | Neural network and method for recognizing grain mildewed grains |
CN112232343B (en) * | 2020-09-03 | 2023-11-21 | 国家粮食和物资储备局科学研究院 | Grain mildew grain identification neural network and identification method |
CN112560967A (en) * | 2020-12-18 | 2021-03-26 | 西安电子科技大学 | Multi-source remote sensing image classification method, storage medium and computing device |
CN112560967B (en) * | 2020-12-18 | 2023-09-15 | 西安电子科技大学 | Multi-source remote sensing image classification method, storage medium and computing device |
CN112560733A (en) * | 2020-12-23 | 2021-03-26 | 上海交通大学 | Multitasking system and method for two-stage remote sensing image |
CN112949699A (en) * | 2021-02-09 | 2021-06-11 | 国家卫星气象中心(国家空间天气监测预警中心) | Remote sensing image classification model establishing and verifying method, system and electronic equipment |
CN112949699B (en) * | 2021-02-09 | 2024-05-14 | 国家卫星气象中心(国家空间天气监测预警中心) | Remote sensing image classification model building and verifying method and system and electronic equipment |
CN112861978A (en) * | 2021-02-20 | 2021-05-28 | 齐齐哈尔大学 | Multi-branch feature fusion remote sensing scene image classification method based on attention mechanism |
CN113095409A (en) * | 2021-04-13 | 2021-07-09 | 西安电子科技大学 | Hyperspectral image classification method based on attention mechanism and weight sharing |
CN113077051A (en) * | 2021-04-14 | 2021-07-06 | 广东博智林机器人有限公司 | Network model training method and device, text classification model and network model |
CN113077051B (en) * | 2021-04-14 | 2024-01-26 | 南京零洞数智科技有限公司 | Network model training method and device, text classification model and network model |
CN113222044B (en) * | 2021-05-25 | 2022-03-08 | 合肥工业大学 | Cervical fluid-based cell classification method based on ternary attention and scale correlation fusion |
CN113222044A (en) * | 2021-05-25 | 2021-08-06 | 合肥工业大学 | Cervical fluid-based cell classification method based on ternary attention and scale correlation fusion |
CN113435253B (en) * | 2021-05-31 | 2022-12-02 | 西安电子科技大学 | Multi-source image combined urban area ground surface coverage classification method |
CN113435253A (en) * | 2021-05-31 | 2021-09-24 | 西安电子科技大学 | Multi-source image combined urban area ground surface coverage classification method |
CN113283529B (en) * | 2021-06-08 | 2022-09-06 | 南通大学 | Neural network construction method for multi-modal image visibility detection |
CN113283529A (en) * | 2021-06-08 | 2021-08-20 | 南通大学 | Neural network construction method for multi-modal image visibility detection |
CN113627590A (en) * | 2021-07-29 | 2021-11-09 | 中汽创智科技有限公司 | Attention module and attention mechanism of convolutional neural network and convolutional neural network |
CN113705718A (en) * | 2021-09-06 | 2021-11-26 | 齐齐哈尔大学 | Remote sensing scene image classification method based on multi-level feature dense fusion |
CN113705718B (en) * | 2021-09-06 | 2024-04-02 | 齐齐哈尔大学 | Remote sensing scene image classification method based on multi-level feature dense fusion |
CN113780147A (en) * | 2021-09-06 | 2021-12-10 | 西安电子科技大学 | Lightweight hyperspectral ground object classification method and system with dynamic fusion convolution network |
CN114119444B (en) * | 2021-11-29 | 2024-04-16 | 武汉大学 | Multi-source remote sensing image fusion method based on deep neural network |
CN114119444A (en) * | 2021-11-29 | 2022-03-01 | 武汉大学 | Multi-source remote sensing image fusion method based on deep neural network |
CN114581773A (en) * | 2022-02-28 | 2022-06-03 | 西安电子科技大学 | Multi-mode remote sensing data classification method based on graph convolution network |
CN114399684B (en) * | 2022-03-24 | 2022-08-05 | 中国科学院西安光学精密机械研究所 | Hyperspectral image openness classification method based on double loss functions |
CN114399684A (en) * | 2022-03-24 | 2022-04-26 | 中国科学院西安光学精密机械研究所 | Hyperspectral image openness classification method based on double loss functions |
CN114972885A (en) * | 2022-06-17 | 2022-08-30 | 西安电子科技大学 | Multi-modal remote sensing image classification method based on model compression |
CN114972885B (en) * | 2022-06-17 | 2024-06-07 | 西安电子科技大学 | Multi-mode remote sensing image classification method based on model compression |
WO2024040828A1 (en) * | 2022-08-26 | 2024-02-29 | 苏州大学 | Method and device for fusion and classification of remote sensing hyperspectral image and laser radar image |
CN115331110A (en) * | 2022-08-26 | 2022-11-11 | 苏州大学 | Fusion classification method and device for remote sensing hyperspectral image and laser radar image |
CN115331110B (en) * | 2022-08-26 | 2024-10-18 | 苏州大学 | Fusion classification method and device for remote sensing hyperspectral image and laser radar image |
CN115546569A (en) * | 2022-12-05 | 2022-12-30 | 鹏城实验室 | Attention mechanism-based data classification optimization method and related equipment |
CN116091875B (en) * | 2023-04-11 | 2023-08-29 | 合肥的卢深视科技有限公司 | Model training method, living body detection method, electronic device, and storage medium |
CN116091875A (en) * | 2023-04-11 | 2023-05-09 | 合肥的卢深视科技有限公司 | Model training method, living body detection method, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109993220B (en) | 2022-12-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109993220A (en) | Multi-source Remote Sensing Images Classification method based on two-way attention fused neural network | |
Roy et al. | Spectral–spatial morphological attention transformer for hyperspectral image classification | |
Zhu et al. | A spectral-spatial-dependent global learning framework for insufficient and imbalanced hyperspectral image classification | |
CN107220657B (en) | High-resolution remote sensing image scene classification method for small data sets | |
CN113449680B (en) | Knowledge distillation-based multimode small target detection method | |
CN110321963B (en) | Hyperspectral image classification method based on fusion of multi-scale and multi-dimensional space spectrum features | |
He et al. | Multi-scale 3D deep convolutional neural network for hyperspectral image classification | |
CN107145830B (en) | Hyperspectral image classification method based on spatial information enhancement and deep belief network | |
CN109359559B (en) | Pedestrian re-identification method based on dynamically occluded samples | |
CN110728192B (en) | High-resolution remote sensing image classification method based on a novel feature pyramid deep network | |
CN108549893A (en) | End-to-end recognition method for scene text of arbitrary shape | |
CN107145836B (en) | Hyperspectral image classification method based on a stacked boundary-discrimination autoencoder | |
CN110084159A (en) | Hyperspectral image classification method based on a CNN with joint multi-stage spatial-spectral information | |
CN108399362A (en) | Rapid pedestrian detection method and device | |
CN106845418A (en) | Hyperspectral image classification method based on deep learning | |
CN108090447A (en) | Hyperspectral image classification method and device under double-branch deep structure | |
CN111353531B (en) | Hyperspectral image classification method based on singular value decomposition and spatial spectral domain attention mechanism | |
CN108846426A (en) | Polarimetric SAR classification method based on a deep bidirectional LSTM twin network | |
CN107025440A (en) | Remote sensing image road extraction method based on a novel convolutional neural network | |
CN108734719A (en) | Automatic foreground/background segmentation method for lepidopteran insect images based on a fully convolutional neural network | |
CN109344698A (en) | Hyperspectral band selection method based on separable convolution and a hard-threshold function | |
CN112347888A (en) | Remote sensing image scene classification method based on bidirectional feature iterative fusion | |
CN107239759A (en) | High-spatial-resolution remote sensing image transfer learning method based on deep features | |
Zhao et al. | Local and long-range collaborative learning for remote sensing scene classification | |
CN113256649B (en) | Remote sensing image station selection and line selection semantic segmentation method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |