CN113128586B - Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image - Google Patents
Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image
- Publication number
- CN113128586B CN113128586B CN202110412317.6A CN202110412317A CN113128586B CN 113128586 B CN113128586 B CN 113128586B CN 202110412317 A CN202110412317 A CN 202110412317A CN 113128586 B CN113128586 B CN 113128586B
- Authority
- CN
- China
- Prior art keywords
- resolution
- images
- image
- time
- convolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 230000007246 mechanism Effects 0.000 title claims abstract description 23
- 238000007500 overflow downdraw method Methods 0.000 title claims abstract description 18
- 230000004927 fusion Effects 0.000 claims abstract description 29
- 238000012549 training Methods 0.000 claims abstract description 27
- 238000013507 mapping Methods 0.000 claims abstract description 17
- 230000007704 transition Effects 0.000 claims abstract description 16
- 238000000034 method Methods 0.000 claims abstract description 15
- 230000008447 perception Effects 0.000 claims abstract description 8
- 230000006870 function Effects 0.000 claims description 12
- 230000002123 temporal effect Effects 0.000 claims description 12
- 230000008569 process Effects 0.000 claims description 7
- 238000010586 diagram Methods 0.000 claims description 6
- 230000009467 reduction Effects 0.000 claims description 4
- 238000007499 fusion processing Methods 0.000 claims description 2
- 238000001228 spectrum Methods 0.000 abstract 1
- 230000000694 effects Effects 0.000 description 8
- 238000013527 convolutional neural network Methods 0.000 description 6
- 230000009286 beneficial effect Effects 0.000 description 5
- 230000003595 spectral effect Effects 0.000 description 4
- 230000014759 maintenance of location Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 230000010339 dilation Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The invention discloses a spatiotemporal fusion method for remote sensing images based on a multi-scale mechanism and series expansion convolution, which comprises the following steps: S1, inputting three images with high temporal and low spatial resolution into a mapping convolutional network, and extracting features through multi-scale perception and series expansion convolution in the mapping convolutional network to obtain three transition images whose resolution is similar to that of the high spatial, low temporal resolution images; S2, inputting the transition images and the high spatial, low temporal resolution images into the reconstruction difference network, and obtaining two high spatial resolution difference images through multi-network collaborative training; S3, performing weighted fusion on the two difference images and the two high spatial, low temporal resolution images, and reconstructing a high spatial, high temporal resolution image. The method improves the accuracy of remote sensing spatiotemporal fusion and alleviates the inaccurate fusion of high-frequency spatial detail and spectral information in traditional remote sensing spatiotemporal algorithms.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a spatiotemporal fusion method for remote sensing images based on a multi-scale mechanism and series expansion convolution with multi-network collaborative training.
Background
Spatiotemporal fusion of remote sensing images belongs to the field of remote sensing image fusion and is widely applied in areas such as farmland monitoring and disaster prediction. It aims to resolve the trade-off between the temporal and spatial resolution of remote sensing images: through spatiotemporal fusion, images with both high temporal and high spatial resolution can be obtained. Existing spatiotemporal fusion algorithms fall into five major categories: weight-function-based, Bayesian-based, unmixing-based, hybrid, and learning-based.
The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) is the most representative weight-function-based algorithm, and many later spatiotemporal fusion algorithms build on it. Bayesian-based and unmixing-based algorithms subsequently diversified as well, and beyond single-category algorithms there are hybrid methods such as the Flexible Spatiotemporal Data Fusion method (FSDAF). In recent years, learning-based algorithms have developed vigorously; they can be further divided into spatiotemporal fusion based on dictionary-pair learning and spatiotemporal fusion based on machine learning. The sparse-representation-based spatiotemporal fusion model (SPSTFM) opened the dictionary-pair-learning line and handles regions of higher heterogeneity well. The spatiotemporal fusion algorithm based on convolutional neural networks (STFDCNN) further improved fusion accuracy, demonstrating the applicability of convolutional neural networks to spatiotemporal fusion, and convolutional-neural-network-based methods have since emerged in large numbers.
Although existing spatiotemporal fusion methods are diverse, many problems remain, for example: fusion accuracy is low in regions of higher heterogeneity; fused images produced by convolutional-neural-network-based algorithms are usually over-smoothed; and spectral information is poorly retained. Multi-scale mechanisms and series (cascaded) dilated convolution are commonly used in video-frame super-resolution, but few spatiotemporal fusion methods have adopted them. The multi-scale mechanism fully extracts feature information from feature maps perceived at several scales, which helps address the poor spectral-information retention of common spatiotemporal fusion methods; the series expansion convolution mechanism extracts edge information from the feature maps, which helps address the over-smoothing and severe loss of spatial detail in common spatiotemporal fusion methods. Existing spatiotemporal fusion algorithms do not use a multi-network collaborative training mechanism; in a multi-network model, each network's training is influenced by the outputs of the other networks, so the networks may converge poorly.
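As a minimal illustration of the receptive-field argument above, the numpy sketch below cascades three 1-D dilated convolutions (kernel size 3, dilations 1, 2, 4 — a hypothetical schedule, not one specified by the patent) and verifies that an impulse spreads over 1 + Σ(k−1)·d = 15 samples:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D correlation with a dilated kernel."""
    k = len(kernel)
    span = (k - 1) * dilation + 1        # effective kernel footprint
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

# Push an impulse through a cascade of dilated convolutions; the
# number of nonzero outputs reveals the stack's receptive field.
x = np.zeros(31)
x[15] = 1.0
y = x
for d in (1, 2, 4):
    y = dilated_conv1d(y, np.ones(3), d)

receptive_field = 1 + sum((3 - 1) * d for d in (1, 2, 4))  # = 15
```

Each stage widens the footprint geometrically while adding only another 3-tap kernel, which is why cascaded dilated convolution can recover edge detail cheaply.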
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art by providing a spatiotemporal fusion method for remote sensing images based on a multi-scale mechanism and series expansion convolution. The technical scheme of the invention is as follows:
a space-time fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image comprises the following steps:
S1, input the three high temporal, low spatial resolution images C_i (i = 1, 2, 3) into the mapping convolutional network, and extract features through the multi-scale perception and series expansion convolution in the mapping convolutional network to obtain three transition images whose resolution is similar to that of the high spatial, low temporal resolution images;
S2, input the transition images and the high spatial, low temporal resolution images into the reconstruction difference convolutional network, and obtain two high spatial resolution difference images through multi-network collaborative training;
S3, perform weighted fusion on the two high spatial resolution difference images and the two high spatial, low temporal resolution images, and reconstruct the high spatial, high temporal resolution image F_1.
Further, the mapping convolutional network of step S1 consists of convolutional layers, a multi-scale perception module and a series expansion convolution module; the multi-scale perception module perceives the input feature map at multiple scales respectively and then stacks the results into a new multi-dimensional feature map, while the series expansion convolution module extracts richer feature information from the image by enlarging the receptive field of the convolutional layers. The process of obtaining the transition-resolution images is as follows:
T_i = M_0(C_i; Φ_0), i = 1, 2, 3,
wherein T_i represents a transition-resolution image, M_0 represents the mapping function of the mapping convolutional network, and Φ_0 represents the training weight parameters of the mapping function.
Further, step S1 specifically comprises the following sub-steps:
S1.1, put the input high temporal, low spatial resolution images at the three moments into the multi-scale perception module to obtain multi-scale perceived feature maps of the images;
S1.2, input the multi-scale perceived feature maps into the series expansion convolution to obtain dimension-reduced feature maps;
S1.3, convert the dimension-reduced feature maps obtained from the three low spatial resolution images into three transition-resolution images T_i (i = 1, 2, 3) through a convolution operation.
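A simplified numpy stand-in for sub-steps S1.1–S1.2: perceive an image at several scales by block average-pooling, upsample each result back, and stack the perceptions into one multi-channel feature map. In the patent this is a trained convolutional module; the pooling scales (1, 2, 4) here are assumptions for illustration only.

```python
import numpy as np

def multi_scale_features(img, scales=(1, 2, 4)):
    """Stack perceptions of `img` at several scales into a
    multi-channel feature map (toy multi-scale perception)."""
    h, w = img.shape
    channels = []
    for s in scales:
        # average-pool with an s x s window (h and w divisible by s) ...
        pooled = img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        # ... then repeat back up to the original resolution
        channels.append(np.repeat(np.repeat(pooled, s, axis=0), s, axis=1))
    return np.stack(channels)            # shape: (len(scales), h, w)

feats = multi_scale_features(np.arange(64.0).reshape(8, 8))
```

The scale-1 channel reproduces the input, while coarser channels keep only the larger structures, so subsequent convolutions can weigh fine and coarse context jointly.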
Further, the reconstruction difference convolutional network and the collaborative training convolutional network of step S2 consist of eight and six basic convolutional layers, respectively; because the task of the reconstruction difference network is more complex, it is given two more basic convolutional layers than the collaborative training network. The output of the collaborative training convolutional network helps the reconstruction difference convolutional network complete training according to temporal correlation, and two high spatial resolution difference images F_T01 and F_T12 are output. The process is as follows:
T_ij = T_i - T_j,
(F_T01, F_T12) = M_1(T_01, T_12, F_0, F_2; Φ_1),
wherein T_ij represents the difference image between the transition-resolution image T_i at time i and the transition-resolution image T_j at time j, F_0 and F_2 respectively represent the high spatial, low temporal resolution images at times 0 and 2, M_1 represents the mapping function of the reconstruction difference convolutional network, and Φ_1 represents the training weight parameters of the mapping function M_1.
Further, step S2 specifically comprises the following sub-steps:
S2.1, input the three transition-resolution images and the two high spatial, low temporal resolution images into the reconstruction difference convolutional network, and obtain two high spatial resolution difference images F_T01 and F_T12 according to the structural correlation of the time sequence;
S2.2, perform collaborative training with known information according to the temporal correlation of the time sequence: input the two known high spatial, low temporal resolution images F_0 and F_2 into the collaborative training convolutional network to output a high-resolution difference image F_T02, and use F_T02 to help the reconstruction difference convolutional network complete training;
S2.3, obtain the two high spatial resolution difference images F_T01 and F_T12 from the trained reconstruction difference convolutional network.
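The collaborative training in S2.2 relies on a temporal-consistency reading of the time sequence: because the interval 0→2 is the union of 0→1 and 1→2, the two reconstructed difference images should sum to the known difference F_2 − F_0, which the co-training network's output F_T02 approximates. The numpy sketch below checks that relation on fabricated data (the 0.6/0.4 split of the total change is invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
F0 = rng.random((4, 4))    # known high spatial resolution image, time 0
F2 = rng.random((4, 4))    # known high spatial resolution image, time 2

# Stand-ins for the reconstruction network's two outputs, fabricated
# so that the 0->1 and 1->2 changes partition the total 0->2 change.
F_T01 = 0.6 * (F2 - F0)
F_T12 = 0.4 * (F2 - F0)

# The co-training target F_T02 is fully determined by known images,
# so it can supervise the reconstruction network during training.
F_T02 = F2 - F0
residual = np.abs((F_T01 + F_T12) - F_T02).max()
```

The point is that F_T02 needs no unknown data at all, which is what lets the collaborative network "help" the reconstruction network converge.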
Further, step S3 obtains the high spatial, high temporal resolution image F_1 through weighted fusion and reconstruction. The fusion process is as follows:
F_1 = ω_0 · (F_0 + F_T01) + ω_2 · (F_2 + F_T12),
wherein ω_0 and ω_2 are the contribution weights of F_0 and F_2, each combined with its high spatial resolution difference image, to the final fused reconstruction result F_1.
The two weight parameters are calculated from:
C_ij = C_i - C_j,
wherein C_ij represents the difference image between the high temporal, low spatial resolution image C_i at time i and the high temporal, low spatial resolution image C_j at time j; v_C01 and v_C12 are derived from C_01 and C_12, respectively; and K is a set constant that prevents the denominator from being 0.
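A runnable numpy sketch of the S3 fusion F_1 = ω_0·(F_0 + F_T01) + ω_2·(F_2 + F_T12). The patent only states that the weights come from the coarse difference images C_01 and C_12 with a constant K guarding against a zero denominator; the inverse-variance form used below is therefore an assumption:

```python
import numpy as np

def fuse(F0, F2, F_T01, F_T12, C01, C12, K=1e-8):
    """Weighted fusion of the two reconstructions of the target date.
    Weights are inversely proportional to the variance of the coarse
    difference images (assumed form); K avoids a zero denominator."""
    v01, v12 = C01.var(), C12.var()
    w0 = (1.0 / (v01 + K)) / (1.0 / (v01 + K) + 1.0 / (v12 + K))
    w2 = 1.0 - w0                        # the two weights sum to one
    return w0 * (F0 + F_T01) + w2 * (F2 + F_T12)

rng = np.random.default_rng(1)
F0, F2 = rng.random((2, 8, 8))
F_T01, F_T12 = rng.random((2, 8, 8)) - 0.5
C01 = np.zeros((8, 8))     # equally "quiet" coarse differences ...
C12 = np.zeros((8, 8))     # ... so both sides should weigh 0.5
F1 = fuse(F0, F2, F_T01, F_T12, C01, C12)
```

With equal-variance coarse differences the two sides contribute equally; a side whose coarse difference varies more (more land-cover change over that interval) is down-weighted.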
The invention has the following advantages and beneficial effects:
the invention is based on a convolutional neural network, and uses a cooperative working mechanism of a plurality of networks, a multi-scale mechanism and a series expansion convolution mechanism. The multi-scale mechanism and the tandem expansion convolution mechanism are commonly used in the field of video frame super-resolution, and few spatio-temporal fusion methods are cited. The multi-scale mechanism can fully extract the characteristic information from a plurality of scale perception characteristic graphs, and is beneficial to solving the problem of poor retention effect of the spectral information of the common space-time fusion method; the series expansion convolution mechanism can extract the edge information of the characteristic diagram, and is beneficial to solving the problems that the fused image is too smooth and the spatial detail is seriously lost in the common space-time fusion method. The existing space-time fusion algorithm does not use a mechanism of excessive network collaborative training, in a multi-network model, the network training effect is influenced by a plurality of network output results, so that the network convergence effect is possibly poor. Through the special network, a fusion result with higher accuracy can be obtained, and in the invention, space-time fusion is carried out by using two pairs of images, so that more known information can be fully utilized, and a better reconstruction fusion effect can be obtained.
Drawings
FIG. 1 is a flow chart of a spatial-temporal fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image in a preferred embodiment;
FIG. 2 is a comparison of results with other mainstream algorithms: (a) reference image; (b) STARFM; (c) ESTARFM; (d) FSDAF; (e) StfNet; (f) DCSTFN; (g) EDCSTFN; (h) the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
FIG. 1 is a flow chart of a spatiotemporal fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image according to a preferred embodiment of the invention;
the method comprises the following specific steps:
S1, inputting the three high temporal, low spatial resolution images into the mapping network, and extracting features through multi-scale perception and series expansion convolution to obtain three transition images whose resolution is similar to that of the high spatial, low temporal resolution images;
S2, inputting the transition images and the high spatial, low temporal resolution images into the reconstruction difference network, and obtaining two high spatial resolution difference images through multi-network collaborative training;
S3, performing weighted fusion on the two difference images and the two high spatial, low temporal resolution images, and reconstructing a high spatial, high temporal resolution image.
In order to evaluate the performance of the invention, a classical data set was selected for experiments, and the experimental results were compared with other classical spatiotemporal fusion algorithms. Among them, STARFM and ESTARFM are weight-function-based algorithms, FSDAF is a hybrid algorithm, and StfNet, DCSTFN, EDCSTFN and the present invention are convolutional-neural-network-based algorithms.
Fig. 2 shows the experimental results of each method. It can be clearly seen that the result image of the invention greatly alleviates the over-smoothing problem compared with the other algorithms. The STARFM result shows severe spectral distortion and the FSDAF result shows loss of detail, whereas the fusion result of the present algorithm is closer to the reference image.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.
Claims (2)
1. A space-time fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image is characterized by comprising the following steps:
S1, inputting three high temporal, low spatial resolution images C_i (i = 1, 2, 3) into a mapping convolutional network, and extracting features through multi-scale perception and series expansion convolution in the mapping convolutional network to obtain three transition images whose resolution is similar to that of the high spatial, low temporal resolution images;
S2, inputting the transition images and the high spatial, low temporal resolution images into the reconstruction difference convolutional network, and obtaining two high spatial resolution difference images through multi-network collaborative training;
S3, performing weighted fusion on the two high spatial resolution difference images and the two high spatial, low temporal resolution images, and reconstructing the high spatial, high temporal resolution image F_1;
The step S1 specifically comprises the following sub-steps:
S1.1, putting the input high temporal, low spatial resolution images at the three moments into the multi-scale perception module to obtain multi-scale perceived feature maps of the images;
S1.2, inputting the multi-scale perceived feature maps into the series expansion convolution to obtain dimension-reduced feature maps;
S1.3, converting the dimension-reduced feature maps obtained from the three low spatial resolution images into three transition-resolution images T_i (i = 1, 2, 3) through a convolution operation;
The reconstruction difference convolutional network and the collaborative training convolutional network of step S2 consist of eight and six basic convolutional layers, respectively; because the task of the reconstruction difference network is more complex, it is given two more basic convolutional layers than the collaborative training network. The output of the collaborative training convolutional network helps the reconstruction difference convolutional network complete training according to temporal correlation, and two high spatial resolution difference images F_T01 and F_T12 are output. The process is as follows:
T_ij = T_i - T_j,
(F_T01, F_T12) = M_1(T_01, T_12, F_0, F_2; Φ_1),
wherein T_ij represents the difference image between the transition-resolution image T_i at time i and the transition-resolution image T_j at time j, F_0 and F_2 respectively represent the high spatial, low temporal resolution images at times 0 and 2, M_1 represents the mapping function of the reconstruction difference convolutional network, and Φ_1 represents the training weight parameters of the mapping function M_1;
the step S2 specifically includes the following sub-steps,
s2.1, inputting three images with transition resolution and two images with high space and low time resolutionObtaining two high-spatial resolution difference images F according to the structural correlation of the time sequence in the reconstruction difference convolution network T01 And F T12 ;
S2.2, performing collaborative training by adopting known information according to the time correlation of the time sequence, and enabling two known high-space low-time resolution images F 0 And F 2 Outputting a high-resolution differential image F in an input cooperative training convolutional network T02 Using high resolution differential images F T02 Helping to reconstruct a differential convolution network to complete training;
s2.3, obtaining two high-spatial-resolution difference images F through the trained reconstruction difference images T01 And F T12 ;
Step S3 obtains the high spatial, high temporal resolution image F_1 through weighted fusion and reconstruction. The fusion process is as follows:
F_1 = ω_0 · (F_0 + F_T01) + ω_2 · (F_2 + F_T12),
wherein ω_0 and ω_2 are the contribution weights of F_0 and F_2, each combined with its high spatial resolution difference image, to the final fused reconstruction result F_1;
the two weight parameters are calculated from:
C_ij = C_i - C_j,
wherein C_ij represents the difference image between the high temporal, low spatial resolution image C_i at time i and the high temporal, low spatial resolution image C_j at time j; v_C01 and v_C12 are derived from C_01 and C_12, respectively; and K is a set constant that prevents the denominator from being 0.
2. The spatiotemporal fusion method based on a multi-scale mechanism and series expansion convolution of remote sensing images according to claim 1, wherein the mapping convolutional network of step S1 consists of convolutional layers, a multi-scale perception module and a series expansion convolution module, wherein the multi-scale perception module perceives the input feature map at multiple scales respectively and then stacks the results into a new multi-dimensional feature map; the series expansion convolution module extracts richer feature information from the image by enlarging the receptive field of the convolutional layers, and the process of obtaining the transition-resolution images is as follows:
T_i = M_0(C_i; Φ_0), i = 1, 2, 3,
wherein T_i represents a transition-resolution image, M_0 represents the mapping function of the mapping convolutional network, and Φ_0 represents the training weight parameters of the mapping function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110412317.6A CN113128586B (en) | 2021-04-16 | 2021-04-16 | Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110412317.6A CN113128586B (en) | 2021-04-16 | 2021-04-16 | Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113128586A CN113128586A (en) | 2021-07-16 |
CN113128586B true CN113128586B (en) | 2022-08-23 |
Family
ID=76777414
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110412317.6A Active CN113128586B (en) | 2021-04-16 | 2021-04-16 | Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113128586B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116310883B (en) * | 2023-05-17 | 2023-10-20 | 山东建筑大学 | Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110263732A (en) * | 2019-06-24 | 2019-09-20 | 京东方科技集团股份有限公司 | Multiscale target detection method and device |
CN110705457A (en) * | 2019-09-29 | 2020-01-17 | 核工业北京地质研究院 | Remote sensing image building change detection method |
CN111080567A (en) * | 2019-12-12 | 2020-04-28 | 长沙理工大学 | Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network |
CN111754404A (en) * | 2020-06-18 | 2020-10-09 | 重庆邮电大学 | Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism |
CN112184554A (en) * | 2020-10-13 | 2021-01-05 | 重庆邮电大学 | Remote sensing image fusion method based on residual mixed expansion convolution |
CN112329685A (en) * | 2020-11-16 | 2021-02-05 | 常州大学 | Method for detecting crowd abnormal behaviors through fusion type convolutional neural network |
CN112529828A (en) * | 2020-12-25 | 2021-03-19 | 西北大学 | Reference data non-sensitive remote sensing image space-time fusion model construction method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8411938B2 (en) * | 2007-11-29 | 2013-04-02 | Sri International | Multi-scale multi-camera adaptive fusion with contrast normalization |
-
2021
- 2021-04-16 CN CN202110412317.6A patent/CN113128586B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110263732A (en) * | 2019-06-24 | 2019-09-20 | 京东方科技集团股份有限公司 | Multiscale target detection method and device |
CN110705457A (en) * | 2019-09-29 | 2020-01-17 | 核工业北京地质研究院 | Remote sensing image building change detection method |
CN111080567A (en) * | 2019-12-12 | 2020-04-28 | 长沙理工大学 | Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network |
CN111754404A (en) * | 2020-06-18 | 2020-10-09 | 重庆邮电大学 | Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism |
CN112184554A (en) * | 2020-10-13 | 2021-01-05 | 重庆邮电大学 | Remote sensing image fusion method based on residual mixed expansion convolution |
CN112329685A (en) * | 2020-11-16 | 2021-02-05 | 常州大学 | Method for detecting crowd abnormal behaviors through fusion type convolutional neural network |
CN112529828A (en) * | 2020-12-25 | 2021-03-19 | 西北大学 | Reference data non-sensitive remote sensing image space-time fusion model construction method |
Non-Patent Citations (2)
Title |
---|
DMNet: A Network Architecture Using Dilated Convolution and Multiscale Mechanisms for Spatiotemporal Fusion of Remote Sensing Images; Weisheng Li; IEEE Sensors Journal; 20200605; full text *
Spatiotemporal fusion of remote sensing images with conditional generative adversarial networks (条件生成对抗遥感图像时空融合); Li Changjie; Journal of Image and Graphics (《中国图象图形学报》); 20210331; full text *
Also Published As
Publication number | Publication date |
---|---|
CN113128586A (en) | 2021-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106910161B (en) | Single image super-resolution reconstruction method based on deep convolutional neural network | |
CN112734646B (en) | Image super-resolution reconstruction method based on feature channel division | |
CN109191382B (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN111275618B (en) | Depth map super-resolution reconstruction network construction method based on double-branch perception | |
CN113177882B (en) | Single-frame image super-resolution processing method based on diffusion model | |
CN112819910B (en) | Hyperspectral image reconstruction method based on double-ghost attention machine mechanism network | |
CN109685716B (en) | Image super-resolution reconstruction method for generating countermeasure network based on Gaussian coding feedback | |
CN111815516B (en) | Super-resolution reconstruction method for weak supervision infrared remote sensing image | |
CN113240683B (en) | Attention mechanism-based lightweight semantic segmentation model construction method | |
CN112561799A (en) | Infrared image super-resolution reconstruction method | |
CN112950480A (en) | Super-resolution reconstruction method integrating multiple receptive fields and dense residual attention | |
CN111861886B (en) | Image super-resolution reconstruction method based on multi-scale feedback network | |
CN115760814A (en) | Remote sensing image fusion method and system based on double-coupling deep neural network | |
CN116486074A (en) | Medical image segmentation method based on local and global context information coding | |
CN117576402B (en) | Deep learning-based multi-scale aggregation transducer remote sensing image semantic segmentation method | |
CN117474781A (en) | High spectrum and multispectral image fusion method based on attention mechanism | |
CN114418850A (en) | Super-resolution reconstruction method with reference image and fusion image convolution | |
CN113139974B (en) | Focus segmentation model training and application method based on semi-supervised learning | |
CN113139904A (en) | Image blind super-resolution method and system | |
CN113128586B (en) | Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image | |
CN109615576A (en) | The single-frame image super-resolution reconstruction method of base study is returned based on cascade | |
CN113888399B (en) | Face age synthesis method based on style fusion and domain selection structure | |
CN111626296A (en) | Medical image segmentation system, method and terminal based on deep neural network | |
CN117576483B (en) | Multisource data fusion ground object classification method based on multiscale convolution self-encoder | |
CN112767277A (en) | Depth feature sequencing deblurring method based on reference image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |