CN113128586B - Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image - Google Patents

Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image

Info

Publication number
CN113128586B
CN113128586B (application CN202110412317.6A)
Authority
CN
China
Prior art keywords
resolution
images
image
time
convolution
Prior art date
Legal status
Active
Application number
CN202110412317.6A
Other languages
Chinese (zh)
Other versions
CN113128586A (en)
Inventor
李伟生 (Li Weisheng)
杨超 (Yang Chao)
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202110412317.6A priority Critical patent/CN113128586B/en
Publication of CN113128586A publication Critical patent/CN113128586A/en
Application granted granted Critical
Publication of CN113128586B publication Critical patent/CN113128586B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a spatial-temporal fusion method for remote sensing images based on a multi-scale mechanism and series expansion convolution, comprising the following steps: S1, input the three high-temporal, low-spatial-resolution images into a mapping convolutional network, and extract features through multi-scale perception and series expansion convolution within the mapping convolutional network to obtain three transition images whose resolution is close to that of the high-spatial, low-temporal-resolution images; S2, input the transition images and the high-spatial, low-temporal-resolution images into the reconstruction difference convolutional network, and obtain two high-spatial-resolution difference images through multi-network collaborative training; S3, perform weighted fusion on the two difference images and the two high-spatial, low-temporal-resolution images, and reconstruct a high-spatial, high-temporal-resolution image. The method improves the accuracy of remote sensing image spatial-temporal fusion and addresses the inaccurate fusion of high-frequency spatial detail and spectral information in traditional remote sensing spatiotemporal fusion algorithms.

Description

Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a spatial-temporal fusion method for remote sensing images that uses a multi-scale mechanism and series expansion convolution together with multi-network collaborative training.
Background
The spatial-temporal fusion of remote sensing images belongs to the field of remote sensing image fusion and is widely applied in farmland monitoring, disaster prediction and other areas. Spatial-temporal fusion aims to resolve the trade-off between the temporal and spatial resolution of remote sensing images; through it, images with both high temporal and high spatial resolution can be obtained. Existing remote sensing spatiotemporal fusion algorithms can be divided into five major categories: weight-function-based, Bayesian-based, unmixing-based, hybrid, and learning-based.
The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) is the most representative weight-function-based algorithm, and many later spatiotemporal fusion algorithms were proposed on the basis of STARFM. Bayesian-based and unmixing-based algorithms also gradually diversified, and in addition to single-category methods there are hybrid approaches such as Flexible Spatiotemporal Data Fusion (FSDAF). In recent years, learning-based algorithms have developed rapidly; they can be further divided into spatiotemporal fusion based on dictionary-pair learning and spatiotemporal fusion based on machine learning. The sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM) opened the line of dictionary-pair-learning methods and handles regions of high heterogeneity comparatively well. The spatiotemporal fusion algorithm based on convolutional neural networks (STFDCNN) further improved fusion accuracy, demonstrating the applicability of convolutional neural networks to spatiotemporal fusion, and convolutional-neural-network-based spatiotemporal fusion methods have since emerged in large numbers.
Although existing spatiotemporal fusion methods are diverse, many problems remain: fusion accuracy is low in regions of high heterogeneity; images fused by convolutional-neural-network-based algorithms are usually over-smoothed; and spectral information is poorly preserved. Multi-scale mechanisms and series (cascaded) dilated convolution are commonly used in video-frame super-resolution but have rarely been introduced into spatiotemporal fusion. A multi-scale mechanism can fully extract feature information from feature maps perceived at several scales, which helps address the poor spectral preservation of common spatiotemporal fusion methods; a series expansion convolution mechanism can extract the edge information of feature maps, which helps address over-smoothed fusion results and serious loss of spatial detail. Existing spatiotemporal fusion algorithms also make little use of multi-network collaborative training; in a multi-network model the outputs of several networks affect one another's training, so without such a mechanism the networks may converge poorly.
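As an illustration of the receptive-field argument above, the following is a minimal sketch, not taken from the patent; the kernel size 3 and the dilation rates 1, 2 and 4 are assumed values used only to show how series (cascaded) dilated convolutions enlarge the receptive field relative to plain convolutions of the same depth.

```python
# Receptive-field growth of stacked stride-1 convolutions:
# each layer adds (kernel_size - 1) * dilation to the receptive field.
def receptive_field(kernel_size, dilations):
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Three plain 3x3 convolutions see a 7x7 window, while the same depth with
# dilations 1, 2, 4 sees 15x15, which is what helps recover edge and
# high-frequency spatial detail without adding parameters.
print(receptive_field(3, [1, 1, 1]))  # -> 7
print(receptive_field(3, [1, 2, 4]))  # -> 15
```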
Disclosure of Invention
The present invention aims to solve the above problems of the prior art by providing a spatial-temporal fusion method for remote sensing images based on a multi-scale mechanism and series expansion convolution. The technical scheme of the invention is as follows:
a space-time fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image comprises the following steps:
S1, input the three high-temporal, low-spatial-resolution images C_i (i = 1, 2, 3) into a mapping convolutional network, and extract features through multi-scale perception and series expansion convolution within the mapping convolutional network to obtain three transition images whose resolution is close to that of the high-spatial, low-temporal-resolution images;
S2, input the transition images and the high-spatial, low-temporal-resolution images into the reconstruction difference convolutional network, and obtain two high-spatial-resolution difference images through multi-network collaborative training;
S3, perform weighted fusion on the two high-spatial-resolution difference images and the two high-spatial, low-temporal-resolution images, and reconstruct a high-spatial, high-temporal-resolution image F_1.
Further, the mapping convolutional network of step S1 consists of convolution layers, a multi-scale perception module and a series expansion convolution module. The multi-scale perception module perceives the input feature map at several scales and then stacks the results into a new multi-dimensional feature map; the series expansion convolution module extracts richer feature information from the image by enlarging the receptive field of the convolution layers. The transition-resolution images are obtained as follows:
T_i = M_0(C_i; φ_0)
where T_i denotes a transition-resolution image, M_0 denotes the mapping function of the mapping convolutional network, and φ_0 denotes the training weight parameters of the mapping function.
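As a rough illustration of this mapping network, the following PyTorch sketch combines a multi-scale perception module with a series dilated-convolution module and a final convolution that outputs the transition-resolution image T_i = M_0(C_i; φ_0). The kernel sizes (3, 5, 7), channel widths, dilation rates (1, 2, 4) and the six-band input are illustrative assumptions and are not fixed by the patent at this point.

```python
import torch
import torch.nn as nn

class MultiScalePerception(nn.Module):
    """Perceive the input at several scales and stack the results channel-wise."""
    def __init__(self, in_ch, branch_ch=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, branch_ch, k, padding=k // 2),
                          nn.ReLU(inplace=True))
            for k in (3, 5, 7)  # assumed perception scales
        ])

    def forward(self, x):
        return torch.cat([branch(x) for branch in self.branches], dim=1)

class SeriesDilatedConv(nn.Module):
    """Cascaded (series) dilated convolutions that enlarge the receptive field
    and reduce the stacked multi-scale features to a smaller channel width."""
    def __init__(self, in_ch, out_ch=32, dilations=(1, 2, 4)):
        super().__init__()
        layers, ch = [], in_ch
        for d in dilations:
            layers += [nn.Conv2d(ch, out_ch, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class MappingNetwork(nn.Module):
    """M_0: maps a coarse (high-temporal, low-spatial) image C_i to a transition image T_i."""
    def __init__(self, bands=6):
        super().__init__()
        self.multi_scale = MultiScalePerception(bands)
        self.dilated = SeriesDilatedConv(3 * 32)
        self.to_image = nn.Conv2d(32, bands, 3, padding=1)

    def forward(self, coarse):
        return self.to_image(self.dilated(self.multi_scale(coarse)))
```

A call such as MappingNetwork()(torch.randn(1, 6, 256, 256)) returns a tensor of the same spatial size, playing the role of one transition image T_i.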
Further, the step S1 specifically includes the following sub-steps,
S1.1, feed the input high-temporal, low-spatial-resolution images at the three moments into the multi-scale perception module to obtain feature maps of the images under multi-scale perception;
S1.2, input the multi-scale perception feature maps into the series expansion convolution to obtain dimension-reduced feature maps;
S1.3, convert the dimension-reduced feature maps obtained from the three low-spatial-resolution images into three transition-resolution images T_i (i = 1, 2, 3) through a convolution operation.
Further, the reconstruction difference convolutional network and the collaborative-training convolutional network of step S2 consist of eight and six basic convolution layers respectively; the task of the reconstruction difference network is more complex, so it is set up with two more basic convolution layers than the collaborative-training network. The output of the collaborative-training convolutional network assists the reconstruction difference convolutional network in completing its training according to temporal correlation, and two high-spatial-resolution difference images F_T01 and F_T12 are output as follows:
T_ij = T_i - T_j
F_T01 = M_1(T_01, F_0, F_2; φ_1)
F_T12 = M_1(T_12, F_0, F_2; φ_1)
where T_ij denotes the difference image between the transition-resolution image T_i at time i and the transition-resolution image T_j at time j, F_0 and F_2 denote the high-spatial, low-temporal-resolution images at times 0 and 2 respectively, M_1 denotes the mapping function of the reconstruction difference convolutional network, and φ_1 denotes the training weight parameters of M_1.
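The following sketch, under the same illustrative assumptions (PyTorch, six bands, 64 feature channels), shows one plausible layout of the two networks described above: the reconstruction difference network M_1 with eight basic convolution layers and the collaborative-training network with six. How exactly the inputs are concatenated is an assumption, not a detail fixed by the patent.

```python
import torch
import torch.nn as nn

def conv_stack(in_ch, out_ch, depth, width=64):
    """`depth` basic 3x3 convolution layers; the last layer maps back to image bands."""
    layers, ch = [], in_ch
    for _ in range(depth - 1):
        layers += [nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(inplace=True)]
        ch = width
    layers.append(nn.Conv2d(ch, out_ch, 3, padding=1))
    return nn.Sequential(*layers)

class ReconstructionDifferenceNet(nn.Module):
    """M_1: eight basic convolution layers predicting a high-spatial-resolution
    difference image such as F_T01 or F_T12."""
    def __init__(self, bands=6):
        super().__init__()
        self.net = conv_stack(in_ch=3 * bands, out_ch=bands, depth=8)

    def forward(self, t_diff, f0, f2):
        # t_diff is a transition difference image T_ij; f0, f2 are the known fine images.
        return self.net(torch.cat([t_diff, f0, f2], dim=1))

class CollaborativeNet(nn.Module):
    """Collaborative-training network: six basic convolution layers mapping the
    known pair (F_0, F_2) to the auxiliary difference image F_T02."""
    def __init__(self, bands=6):
        super().__init__()
        self.net = conv_stack(in_ch=2 * bands, out_ch=bands, depth=6)

    def forward(self, f0, f2):
        return self.net(torch.cat([f0, f2], dim=1))
```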
Further, the step S2 specifically includes the following sub-steps,
S2.1, input the three transition-resolution images and the two high-spatial, low-temporal-resolution images into the reconstruction difference convolutional network, and obtain the two high-spatial-resolution difference images F_T01 and F_T12 according to the structural correlation of the time series;
S2.2, perform collaborative training with known information according to the temporal correlation of the time series: feed the two known high-spatial, low-temporal-resolution images F_0 and F_2 into the collaborative-training convolutional network, which outputs a high-resolution difference image F_T02, and use F_T02 to help the reconstruction difference convolutional network complete its training (a sketch of one such training step follows these sub-steps);
S2.3, obtain the two high-spatial-resolution difference images F_T01 and F_T12 from the trained reconstruction difference network.
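A hedged sketch of one training step for these sub-steps is given below. The specific loss terms (L1 losses, and the temporal-consistency constraint F_T01 + F_T12 ≈ F_T02) are assumptions made for illustration; the patent only states that the output F_T02 of the collaborative-training network helps the reconstruction difference network complete its training. `recon_net` and `collab_net` are instances of the two networks sketched above, and ground-truth difference images are assumed to be available during training.

```python
import torch
import torch.nn.functional as F

def training_step(recon_net, collab_net, optimizer, batch):
    # batch: transition differences T_01, T_12, known fine images F_0, F_2,
    # and ground-truth difference images gt01, gt12 used only during training.
    t01, t12, f0, f2, gt01, gt12 = batch
    pred01 = recon_net(t01, f0, f2)        # F_T01
    pred12 = recon_net(t12, f0, f2)        # F_T12
    with torch.no_grad():
        # Auxiliary target from the known pair; the collaborative network is
        # assumed to be trained separately (e.g., against F_2 - F_0).
        f_t02 = collab_net(f0, f2)
    loss = (F.l1_loss(pred01, gt01)
            + F.l1_loss(pred12, gt12)
            + F.l1_loss(pred01 + pred12, f_t02))   # temporal-consistency term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```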
Further, step S3 obtains the high-spatial, high-temporal-resolution image F_1 through weighted fusion and reconstruction; the fusion process is as follows:
F_1 = ω_0·(F_0 + F_T01) + ω_2·(F_2 + F_T12)
where ω_0 and ω_2 are the contribution weights of F_0 and F_2, each combined with its high-spatial-resolution difference image, to the final fused reconstruction result F_1.
Two weight parameters are calculated:
C_ij = C_i - C_j
ω_0 = (1/(v_C01 + k)) / (1/(v_C01 + k) + 1/(v_C12 + k))
ω_2 = (1/(v_C12 + k)) / (1/(v_C01 + k) + 1/(v_C12 + k))
where C_ij denotes the difference image between the high-temporal, low-spatial-resolution image C_i at time i and the high-temporal, low-spatial-resolution image C_j at time j, v_C01 and v_C12 are scalar values derived from C_01 and C_12 respectively, and k is a set constant that prevents the denominator from being 0.
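A minimal sketch of this weighted fusion follows. It assumes v_C01 and v_C12 are the mean absolute values of the coarse difference images C_01 and C_12 (the patent only states that they are derived from those images) and that k is a small constant guarding the denominator, while the fusion formula itself, F_1 = ω_0·(F_0 + F_T01) + ω_2·(F_2 + F_T12), is the one given above.

```python
import torch

def weighted_fusion(f0, f2, f_t01, f_t12, c0, c1, c2, k=1e-6):
    # c0, c1, c2: coarse (high-temporal, low-spatial) images at the three dates,
    # so that C_01 = c0 - c1 and C_12 = c1 - c2.
    v01 = (c0 - c1).abs().mean()          # assumed scalar statistic of C_01
    v12 = (c1 - c2).abs().mean()          # assumed scalar statistic of C_12
    inv01, inv12 = 1.0 / (v01 + k), 1.0 / (v12 + k)
    w0 = inv01 / (inv01 + inv12)          # contribution weight of the t0 pair
    w2 = inv12 / (inv01 + inv12)          # contribution weight of the t2 pair
    return w0 * (f0 + f_t01) + w2 * (f2 + f_t12)
```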
The invention has the following advantages and beneficial effects:
the invention is based on a convolutional neural network, and uses a cooperative working mechanism of a plurality of networks, a multi-scale mechanism and a series expansion convolution mechanism. The multi-scale mechanism and the tandem expansion convolution mechanism are commonly used in the field of video frame super-resolution, and few spatio-temporal fusion methods are cited. The multi-scale mechanism can fully extract the characteristic information from a plurality of scale perception characteristic graphs, and is beneficial to solving the problem of poor retention effect of the spectral information of the common space-time fusion method; the series expansion convolution mechanism can extract the edge information of the characteristic diagram, and is beneficial to solving the problems that the fused image is too smooth and the spatial detail is seriously lost in the common space-time fusion method. The existing space-time fusion algorithm does not use a mechanism of excessive network collaborative training, in a multi-network model, the network training effect is influenced by a plurality of network output results, so that the network convergence effect is possibly poor. Through the special network, a fusion result with higher accuracy can be obtained, and in the invention, space-time fusion is carried out by using two pairs of images, so that more known information can be fully utilized, and a better reconstruction fusion effect can be obtained.
Drawings
FIG. 1 is a flow chart of a spatial-temporal fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image in a preferred embodiment;
FIG. 2 compares the results with other mainstream algorithms: (a) reference image; (b) STARFM; (c) ESTARFM; (d) FSDAF; (e) StfNet; (f) DCSTFN; (g) EDCSTFN; (h) the method of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
FIG. 1 is a flow chart of a spatiotemporal fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image according to a preferred embodiment of the invention;
the method comprises the following specific steps:
S1, input the three high-temporal, low-spatial-resolution images into the mapping network, and extract features through multi-scale perception and series expansion convolution to obtain three transition images whose resolution is close to that of the high-spatial, low-temporal-resolution images;
S2, input the transition images and the high-spatial, low-temporal-resolution images into the reconstruction difference network, and obtain two high-spatial-resolution difference images through multi-network collaborative training;
S3, perform weighted fusion on the two difference images and the two high-spatial, low-temporal-resolution images, and reconstruct a high-spatial, high-temporal-resolution image.
In order to evaluate the performance of the invention, a classical data set was selected for experiments, and the experimental results were compared with seven other classical spatiotemporal fusion algorithms, where STARFM and ESTARFM are weight-function-based algorithms, FSDAF is a hybrid algorithm, and StfNet, DCSTFN, EDCSTFN and the present invention are convolutional-neural-network-based algorithms.
Fig. 2 shows the experimental results of each method. It can clearly be seen that the result image of the invention greatly alleviates the over-smoothing problem compared with the other algorithms. The STARFM result shows severe spectral distortion and the FSDAF result shows loss of detail, whereas the fusion result of the proposed algorithm is closer to the reference image.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (2)

1. A space-time fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image is characterized by comprising the following steps:
S1, input the three high-temporal, low-spatial-resolution images C_i (i = 1, 2, 3) into a mapping convolutional network, and extract features through multi-scale perception and series expansion convolution within the mapping convolutional network to obtain three transition images whose resolution is close to that of the high-spatial, low-temporal-resolution images;
S2, input the transition images and the high-spatial, low-temporal-resolution images into the reconstruction difference convolutional network, and obtain two high-spatial-resolution difference images through multi-network collaborative training;
S3, perform weighted fusion on the two high-spatial-resolution difference images and the two high-spatial, low-temporal-resolution images, and reconstruct a high-spatial, high-temporal-resolution image F_1;
The step S1 specifically includes the following sub-steps,
S1.1, feed the input high-temporal, low-spatial-resolution images at the three moments into the multi-scale perception module to obtain feature maps of the images under multi-scale perception;
S1.2, input the multi-scale perception feature maps into the series expansion convolution to obtain dimension-reduced feature maps;
S1.3, convert the dimension-reduced feature maps obtained from the three low-spatial-resolution images into three transition-resolution images T_i (i = 1, 2, 3) through a convolution operation;
The reconstruction difference convolutional network and the collaborative-training convolutional network of step S2 consist of eight and six basic convolution layers respectively; the task of the reconstruction difference network is more complex, so it is set up with two more basic convolution layers than the collaborative-training network. The output of the collaborative-training convolutional network assists the reconstruction difference convolutional network in completing its training according to temporal correlation, and two high-spatial-resolution difference images F_T01 and F_T12 are output as follows:
T_ij = T_i - T_j
F_T01 = M_1(T_01, F_0, F_2; φ_1)
F_T12 = M_1(T_12, F_0, F_2; φ_1)
where T_ij denotes the difference image between the transition-resolution image T_i at time i and the transition-resolution image T_j at time j, F_0 and F_2 denote the high-spatial, low-temporal-resolution images at times 0 and 2 respectively, M_1 denotes the mapping function of the reconstruction difference convolutional network, and φ_1 denotes the training weight parameters of M_1;
the step S2 specifically includes the following sub-steps,
S2.1, input the three transition-resolution images and the two high-spatial, low-temporal-resolution images into the reconstruction difference convolutional network, and obtain the two high-spatial-resolution difference images F_T01 and F_T12 according to the structural correlation of the time series;
S2.2, perform collaborative training with known information according to the temporal correlation of the time series: feed the two known high-spatial, low-temporal-resolution images F_0 and F_2 into the collaborative-training convolutional network, which outputs a high-resolution difference image F_T02, and use F_T02 to help the reconstruction difference convolutional network complete its training;
S2.3, obtain the two high-spatial-resolution difference images F_T01 and F_T12 from the trained reconstruction difference network;
Step S3 obtains the high-spatial, high-temporal-resolution image F_1 through weighted fusion and reconstruction; the fusion process is as follows:
F_1 = ω_0·(F_0 + F_T01) + ω_2·(F_2 + F_T12)
where ω_0 and ω_2 are the contribution weights of F_0 and F_2, each combined with its high-spatial-resolution difference image, to the final fused reconstruction result F_1;
two weight parameters are calculated:
C_ij = C_i - C_j
ω_0 = (1/(v_C01 + k)) / (1/(v_C01 + k) + 1/(v_C12 + k))
ω_2 = (1/(v_C12 + k)) / (1/(v_C01 + k) + 1/(v_C12 + k))
where C_ij denotes the difference image between the high-temporal, low-spatial-resolution image C_i at time i and the high-temporal, low-spatial-resolution image C_j at time j, v_C01 and v_C12 are scalar values derived from C_01 and C_12 respectively, and k is a set constant that prevents the denominator from being 0.
2. The spatial-temporal fusion method based on a multi-scale mechanism and series expansion convolution of remote sensing images according to claim 1, characterized in that the mapping convolutional network of step S1 consists of convolution layers, a multi-scale perception module and a series expansion convolution module, wherein the multi-scale perception module perceives the input feature map at several scales and then stacks the results into a new multi-dimensional feature map; the series expansion convolution module extracts richer feature information from the image by enlarging the receptive field of the convolution layers, and the transition-resolution images are obtained as follows:
T_i = M_0(C_i; φ_0)
where T_i denotes a transition-resolution image, M_0 denotes the mapping function of the mapping convolutional network, and φ_0 denotes the training weight parameters of the mapping function.
CN202110412317.6A 2021-04-16 2021-04-16 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image Active CN113128586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110412317.6A CN113128586B (en) 2021-04-16 2021-04-16 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110412317.6A CN113128586B (en) 2021-04-16 2021-04-16 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image

Publications (2)

Publication Number Publication Date
CN113128586A CN113128586A (en) 2021-07-16
CN113128586B (en) 2022-08-23

Family

ID=76777414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110412317.6A Active CN113128586B (en) 2021-04-16 2021-04-16 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image

Country Status (1)

Country Link
CN (1) CN113128586B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310883B (en) * 2023-05-17 2023-10-20 山东建筑大学 Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8411938B2 (en) * 2007-11-29 2013-04-02 Sri International Multi-scale multi-camera adaptive fusion with contrast normalization

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263732A (en) * 2019-06-24 2019-09-20 京东方科技集团股份有限公司 Multiscale target detection method and device
CN110705457A (en) * 2019-09-29 2020-01-17 核工业北京地质研究院 Remote sensing image building change detection method
CN111080567A (en) * 2019-12-12 2020-04-28 长沙理工大学 Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN111754404A (en) * 2020-06-18 2020-10-09 重庆邮电大学 Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism
CN112184554A (en) * 2020-10-13 2021-01-05 重庆邮电大学 Remote sensing image fusion method based on residual mixed expansion convolution
CN112329685A (en) * 2020-11-16 2021-02-05 常州大学 Method for detecting crowd abnormal behaviors through fusion type convolutional neural network
CN112529828A (en) * 2020-12-25 2021-03-19 西北大学 Reference data non-sensitive remote sensing image space-time fusion model construction method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DMNet: A Network Architecture Using Dilated Convolution and Multiscale Mechanisms for Spatiotemporal Fusion of Remote Sensing Images; Weisheng Li; IEEE Sensors Journal; 2020-06-05; full text *
Spatiotemporal fusion of remote sensing images with conditional generative adversarial networks (条件生成对抗遥感图像时空融合); Li Changjie (李昌洁); Journal of Image and Graphics (中国图象图形学报); 2021-03-31; full text *

Also Published As

Publication number Publication date
CN113128586A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
CN106910161B (en) Single image super-resolution reconstruction method based on deep convolutional neural network
CN112734646B (en) Image super-resolution reconstruction method based on feature channel division
CN109191382B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN111275618B (en) Depth map super-resolution reconstruction network construction method based on double-branch perception
CN113177882B (en) Single-frame image super-resolution processing method based on diffusion model
CN112819910B (en) Hyperspectral image reconstruction method based on double-ghost attention machine mechanism network
CN109685716B (en) Image super-resolution reconstruction method for generating countermeasure network based on Gaussian coding feedback
CN111815516B (en) Super-resolution reconstruction method for weak supervision infrared remote sensing image
CN113240683B (en) Attention mechanism-based lightweight semantic segmentation model construction method
CN112561799A (en) Infrared image super-resolution reconstruction method
CN112950480A (en) Super-resolution reconstruction method integrating multiple receptive fields and dense residual attention
CN111861886B (en) Image super-resolution reconstruction method based on multi-scale feedback network
CN115760814A (en) Remote sensing image fusion method and system based on double-coupling deep neural network
CN116486074A (en) Medical image segmentation method based on local and global context information coding
CN117576402B (en) Deep learning-based multi-scale aggregation transducer remote sensing image semantic segmentation method
CN117474781A (en) High spectrum and multispectral image fusion method based on attention mechanism
CN114418850A (en) Super-resolution reconstruction method with reference image and fusion image convolution
CN113139974B (en) Focus segmentation model training and application method based on semi-supervised learning
CN113139904A (en) Image blind super-resolution method and system
CN113128586B (en) Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image
CN109615576A (en) The single-frame image super-resolution reconstruction method of base study is returned based on cascade
CN113888399B (en) Face age synthesis method based on style fusion and domain selection structure
CN111626296A (en) Medical image segmentation system, method and terminal based on deep neural network
CN117576483B (en) Multisource data fusion ground object classification method based on multiscale convolution self-encoder
CN112767277A (en) Depth feature sequencing deblurring method based on reference image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant