CN114445306B - Remote sensing image fusion method based on detail injection model - Google Patents
Remote sensing image fusion method based on detail injection model
- Publication number: CN114445306B (application CN202210126647.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- detail
- full
- frequency
- component
- Prior art date
- Legal status: Active
Classifications
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/10032 — Satellite or aerial image; Remote sensing
- G06T2207/10036 — Multispectral image; Hyperspectral image
- G06T2207/10041 — Panchromatic image
Abstract
The invention belongs to the field of remote sensing image fusion and relates to a remote sensing image fusion method based on a detail injection model. The method comprises: constructing the panchromatic image into an N-dimensional image according to the number N of bands of the multispectral image, and performing band-by-band histogram matching to obtain a second panchromatic image; acquiring the first high-frequency detail components of the second panchromatic image and of the multispectral image; acquiring a third panchromatic image from the first high-frequency detail component of the second panchromatic image according to pixel saliency, and extracting a first detail map from the third panchromatic image; acquiring a second high-frequency detail component from the first high-frequency detail component using a guided filter, computing the residual between the two, and obtaining a second detail map from the residual and the first detail map; estimating a detail-enhanced detail map by the steepest descent method to obtain a third detail map; and fusing the third detail map with the multispectral image to obtain the fusion result. The invention has low time complexity and obtains richer detail information.
Description
Technical Field
The invention belongs to the field of remote sensing image fusion, and particularly relates to a remote sensing image fusion method based on a detail injection model.
Background
With the continuous development of satellite sensor technology, remote sensing data acquisition capability has steadily improved. Limited by the imaging mechanism, a single satellite sensor has difficulty obtaining remote sensing images with both high spatial and high spectral resolution. Currently, most commercial satellites, such as QuickBird, IKONOS, GeoEye and WorldView, can acquire both panchromatic (PAN) and multispectral (MS) images. Because PAN and MS images of the same scene acquired at the same time are highly complementary, image fusion technology can produce multispectral remote sensing images with high spatial resolution for image classification, target recognition, change detection, feature extraction, image segmentation and the like. Therefore, how to combine the complementary information of the PAN image and the MS image is a problem to be solved, and research on the fusion of multispectral and panchromatic images is of great significance.
Fusion techniques for panchromatic and multispectral images mainly fall into three categories: component substitution (CS) methods, multiresolution analysis (MRA) methods, and variational optimization (VO) methods. The basic idea of CS methods is to extract high-frequency detail information from the PAN image and inject it into the MS image; representative algorithms include principal component analysis and Gram-Schmidt orthogonalization. Such methods typically have low time complexity but are prone to spectral distortion, because they do not adequately account for the retention of spectral information. MRA-based methods decompose the original image into several scales using pyramid or wavelet transforms, fuse the sub-band images of each scale with suitable fusion rules, and obtain the fused image by the inverse transform; representative algorithms include the à trous wavelet transform and the non-subsampled contourlet transform. Overall, because the anisotropy of ground objects is considered, these methods preserve spectral information well, but the quality of the fusion result depends heavily on the fusion rules and on the numbers of decomposition levels and directions. VO-based models are an important class of pansharpening methods, generally comprising the design and solution of an energy functional; representative algorithms include P+XS and sparse representation. Although such methods have good fidelity, the calculation process is complex and the time cost is high.
Existing fusion techniques for panchromatic and multispectral images still have the following defects:
(1) Low time efficiency. Compared with conventional RGB images, multispectral images have high-dimensional spectral channels and large data volumes, and many current algorithms have high time complexity; for example, some methods based on deep learning and variational optimization cannot, to some extent, meet practical demands with strict real-time requirements.
(2) Spectral distortion. Because the low-frequency information of an image carries a large amount of spectral features, many current algorithms separate the low-frequency information of the panchromatic and multispectral images and then fuse it. The quality of the fusion result depends heavily on the choice of fusion method and parameter settings, and an unsuitable method or parameters can cause severe spectral distortion.
(3) Loss of spatial detail. Algorithms based on the detail injection model usually inject the extracted single-band detail information into each band of the original multispectral image in turn, ignoring the differences between the bands of the original multispectral image; this easily lowers the detail sharpness of the fused image.
Disclosure of Invention
Aiming at the problems of low time efficiency and space detail information loss of the algorithm in the prior art, the invention provides a remote sensing image fusion method based on a detail injection model, which specifically comprises the following steps:
s1, if the band number of the multispectral image is N, constructing a full-color image into an N-dimensional image, and recording the N-dimensional image as a first full-color image, wherein the full-color image is the same as the original full-color image in any dimension;
S2, carrying out band-by-band histogram matching between the N-dimensional panchromatic image and the multispectral image to obtain a histogram-matched N-dimensional panchromatic image, and marking it as a second panchromatic image;
S3, performing iterative filtering on the second full-color image and the multispectral image by using rolling guide filtering to obtain respective low-pass images, and obtaining respective first high-frequency detail components by differential operation;
S4, acquiring a single-band detail image of the full-color image from the second full-color image according to pixel significance judgment, and marking the single-band detail image as a third full-color image;
s5, obtaining a high-frequency brightness component from the high-frequency detail components of the third full-color image and the multispectral image through an AIHS algorithm;
S6, extracting detail information except high-frequency brightness components from the third full-color image, and recording the detail information as a first detail image;
S7, filtering the first high-frequency detail component of the second full-color image and the first high-frequency detail component of the multispectral image by using a guide filter to obtain respective second high-frequency detail components;
S8, taking the sum of the residual errors of the first high-frequency detail component and the second high-frequency detail component of the multispectral image and the first detail image as a second detail image;
s9, estimating a detail map after detail enhancement according to a steepest descent method by taking the second detail map as an initial value to obtain a third detail map;
S10, fusing the multispectral image with the third detail image, and obtaining a high-frequency component of the brightness component of the remote sensing image according to the fusion result to finish the fusion.
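Steps S1-S2 (band replication and band-wise histogram matching) can be sketched in NumPy. This is an illustrative sketch, not the patent's implementation: the helper names are invented, PAN is assumed to be an (H, W) array and MS an (H, W, N) array.

```python
import numpy as np

def match_histogram(src, ref):
    """Monotonically remap src so its value distribution matches ref."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size      # quantile of each unique src value
    r_cdf = np.cumsum(r_cnt) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)  # map src quantiles onto ref values
    return mapped[s_idx].reshape(src.shape)

def build_second_pan(pan, ms):
    """S1-S2: replicate PAN to N bands, then histogram-match each band to MS."""
    n = ms.shape[2]
    pans = np.repeat(pan.astype(float)[:, :, None], n, axis=2)  # first panchromatic image
    for i in range(n):
        pans[:, :, i] = match_histogram(pans[:, :, i], ms[:, :, i])
    return pans                                                  # second panchromatic image
```

Each band of the result then shares the intensity statistics of the corresponding MS band, which is what makes the later band-wise difference operations meaningful.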
Further, the process of acquiring the first high frequency detail components of the second full color image and the multispectral image includes the steps of:
Pans_L = RGF(Pans, σ_s, σ_r, t)
MS_L = RGF(MS, σ_s, σ_r, t)
Pans_H = Pans − Pans_L
MS_H = MS − MS_L
Wherein Pans_L is the low-pass image of the second panchromatic image; RGF(·) denotes the rolling guidance filtering operation; Pans is the second panchromatic image; σ_s is the spatial standard deviation, whose value controls the filtering scale; σ_r is the weight parameter of the Gaussian function in the gray-scale domain; t denotes the number of iterations; MS_L is the low-pass image of the multispectral image; MS is the multispectral image; Pans_H is the first high-frequency detail component of the second panchromatic image; MS_H is the first high-frequency detail component of the multispectral image.
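The S3 decomposition can be sketched as follows, with a plain separable Gaussian low-pass standing in for the rolling guidance filter RGF (the real RGF iterates a guided joint filter and also uses the range parameter σ_r, which this simplified stand-in ignores):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian low-pass with edge padding (stand-in for RGF)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='valid'), 0, tmp)

def decompose(img, sigma_s=3.0, t=4):
    """S3: low-pass image by t filtering passes, high-frequency detail by subtraction."""
    low = img.astype(float)
    for _ in range(t):           # iterative filtering, as in RGF
        low = gaussian_blur(low, sigma_s)
    return img - low, low        # (high-frequency detail component, low-pass image)
```

Applied to both Pans and MS band by band, this yields Pans_H and MS_H by the same difference operation as the formulas above.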
Further, the third panchromatic image is acquired as:
Pan_H(r, k) = Pans_H^i*(r, k), with i* = argmax_{i ∈ {1, …, N}} abs(Pans_H^i(r, k))
wherein Pan_H(r, k) denotes the pixel in the r-th row and k-th column of the third panchromatic image; Pans_H^i(r, k) denotes the pixel in the r-th row and k-th column of the i-th dimension of the first high-frequency detail component of the second panchromatic image, i = {1, 2, …, N}; abs(·) denotes the absolute-value operation.
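The per-pixel band selection by saliency can be written with NumPy's argmax over the band axis. A sketch assuming Pans_H is an (H, W, N) array; the function name is illustrative:

```python
import numpy as np

def saliency_select(pans_h):
    """S4: at each pixel keep the band of Pans_H with the largest absolute value."""
    idx = np.abs(pans_h).argmax(axis=2)   # index of the winning band per pixel
    r, k = np.indices(idx.shape)          # row/column coordinate grids
    return pans_h[r, k, idx]              # third panchromatic image Pan_H
```

Note the selected value keeps its sign; only the comparison uses the absolute value, mirroring the choose-max-absolute-coefficient rule.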
Further, the first detail map is expressed as:
D_extra^i = Pans_H^i − I_H
wherein D_extra^i is the first detail map corresponding to the i-th dimension of the panchromatic image; Pans_H^i is the first high-frequency detail component of the i-th dimension image of the second panchromatic image; I_H is the high-frequency luminance component.
Further, the high-frequency luminance component I_H is obtained from the high-frequency detail components of the third panchromatic image and the multispectral image by the AIHS algorithm:
I_H = Σ_{i=1}^{N} α_i MS_H^i, with the weights estimated by minimizing ||Pan_H − Σ_{i=1}^{N} α_i MS_H^i||² subject to α_i ≥ 0
wherein α_i denotes the adaptive factor; MS_H^i denotes the first high-frequency detail component of the i-th band of the multispectral image.
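The adaptive factors can be sketched as a non-negative least-squares fit of Pan_H against the MS detail bands. Projected gradient descent is used here purely as an illustrative solver; the patent does not spell out which AIHS solver it uses, and the learning rate and iteration count are assumptions:

```python
import numpy as np

def aihs_weights(pan_h, ms_h, iters=500, lr=1e-3):
    """S5 sketch: alpha_i >= 0 minimising ||Pan_H - sum_i alpha_i MS_H^i||^2."""
    n = ms_h.shape[2]
    A = ms_h.reshape(-1, n)          # each column: one band's high-frequency detail
    b = pan_h.ravel()
    alpha = np.full(n, 1.0 / n)      # start from equal weights
    for _ in range(iters):
        grad = A.T @ (A @ alpha - b)             # gradient of the squared error
        alpha = np.clip(alpha - lr * grad, 0.0, None)  # project onto alpha_i >= 0
    return alpha
```

The high-frequency luminance component is then the weighted sum `I_H = ms_h @ alpha` over the band axis.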
Further, the process of obtaining the second detail map comprises the following steps:
MS_H' = GF(Pans_H, MS_H, m, γ)
Res = MS_H − MS_H'
D_final^i = Res^i + D_extra^i
wherein MS_H' is the second high-frequency detail component obtained from the guided filter; GF(·) is the guided filter; Pans_H is the first high-frequency detail component of the second panchromatic image; MS_H is the first high-frequency detail component of the multispectral image; m is the filter window; γ is a penalty factor; Res is the residual between the first and second high-frequency detail components of the multispectral image; D_extra^i is the first detail map corresponding to the i-th dimension of the panchromatic image; D_final is the second detail map.
Further, the process of estimating the detail-enhanced detail map by the steepest descent method comprises:
D_i^(n+1) = D_i^(n) − η ∇F(D_i^(n)), with F(D_i) = ||Pan_H − (MS_H^i + D_i)||²
wherein D_i^(n+1) is the detail map after the (n+1)-th iteration; D_i^(n) denotes the i-th dimension third detail map obtained from the n-th iteration; initially D_i^(0) takes the value of the second detail map; F is the minimized difference function; D_i is the i-th dimension third detail map; solving the minimized difference function F yields F(D_i), the result image with the smallest difference from Pan_H; Pan_H is the third panchromatic image; MS_H is the first high-frequency detail component of the multispectral image; ||·|| denotes the norm.
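Step S9 reduces to a plain gradient-descent loop. The objective F(D) = ||Pan_H − D||² used below is an assumed simplification (the patent gives F only implicitly), and η and the iteration count are illustrative:

```python
import numpy as np

def enhance_details(d_final, pan_h, eta=0.1, iters=50):
    """S9 sketch: steepest descent from the second detail map D_final
    toward the third panchromatic image Pan_H, under F(D) = ||Pan_H - D||^2."""
    d = d_final.astype(float).copy()      # D^(0) = D_final
    for _ in range(iters):
        grad = 2.0 * (d - pan_h)          # dF/dD for the assumed objective
        d -= eta * grad                   # steepest descent step, step length eta
    return d                              # third detail map
```

With this quadratic objective each step contracts the error by a fixed factor, so the loop converges geometrically toward Pan_H.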
Further, the high-frequency component of the luminance component of the remote sensing image is expressed as:
Î_H = Σ_{i=1}^{N} α_i (MS_H^i + D_i)
wherein Î_H is the high-frequency component of the luminance component of the remote sensing image; F_i is the ideal fusion result image of the i-th dimension, whose high-frequency part is MS_H^i + D_i; MS_H^i is the first high-frequency detail component of the multispectral image; D_i is the i-th dimension third detail map; N is the band number of the multispectral image.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention involves only simple filtering and difference operations, so its time complexity is low; for a 256×256 input image, the processing result can be obtained within seconds.
(2) Rolling guidance filtering has scale-aware and edge-preserving properties, so under ideal conditions the low-pass image it produces retains little gradient information, while the high-frequency image obtained by the difference operation carries rich detail information. The invention can therefore better maintain spatial structure characteristics.
In summary, unlike conventional detail injection methods, the invention holds that different detail information should be injected into different bands of the original multispectral image; it therefore extracts multi-level detail residuals using guided filtering and difference operations to reduce spatial and spectral distortion.
Drawings
FIG. 1 is a flow chart of the invention based on detailed information extraction;
fig. 2 is a flowchart of a remote sensing image fusion method based on a detail injection model.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a remote sensing image fusion method based on a detail injection model, which specifically comprises the following steps of:
s1, if the band number of the multispectral image is N, constructing a full-color image into an N-dimensional image, and recording the N-dimensional image as a first full-color image, wherein the full-color image is the same as the original full-color image in any dimension;
S2, carrying out band-by-band histogram matching between the N-dimensional panchromatic image and the multispectral image to obtain a histogram-matched N-dimensional panchromatic image, and marking it as a second panchromatic image;
S3, performing iterative filtering on the second full-color image and the multispectral image by using rolling guide filtering to obtain respective low-pass images, and obtaining respective first high-frequency detail components by differential operation;
S4, acquiring a single-band detail image of the full-color image from the second full-color image according to pixel significance judgment, and marking the single-band detail image as a third full-color image;
s5, obtaining a high-frequency brightness component from the high-frequency detail components of the third full-color image and the multispectral image through an AIHS algorithm;
S6, extracting detail information except high-frequency brightness components from the third full-color image, and recording the detail information as a first detail image;
s7, filtering the high-frequency detail components of the second full-color image and the high-frequency detail components of the multispectral image by using a guide filter to obtain respective second high-frequency detail components;
S8, taking the sum of the residual errors of the first high-frequency detail component and the second high-frequency detail component of the multispectral image and the first detail image as a second detail image;
s9, estimating a detail map after detail enhancement according to a steepest descent method by taking the second detail map as an initial value to obtain a third detail map;
S10, fusing the multispectral image with the third detail image, and obtaining a high-frequency component of the brightness component of the remote sensing image according to the fusion result to finish the fusion.
In this embodiment, based on the flowchart frame of fig. 1 based on detailed information extraction, the implementation process of the present invention includes the following steps:
Step 1: considering that certain differences exist among different wave bands of the MS image, assuming that the wave band number of the MS image is N, in order to facilitate subsequent processing, firstly constructing the Pan image into an N-dimensional image, wherein any dimension Pan image is identical to the original Pan image.
Step 2: and (3) performing band-by-band histogram matching on the N-dimensional Pan image and the MS image in the step (1) to obtain a histogram matched N-dimensional Pan image, and marking the N-dimensional Pan image as Pans.
Step 3: the Pans and the original multispectral image are subjected to iterative filtering by utilizing rolling guide filtering to obtain low-pass images Pans L and MS L, and then the respective high-frequency detail components are obtained through differential operation, wherein the calculation mode is as follows:
Pans_L = RGF(Pans, σ_s, σ_r, t)
MS_L = RGF(MS, σ_s, σ_r, t)
Pans_H = Pans − Pans_L
MS_H = MS − MS_L
Wherein Pans_H represents the high-frequency detail component of Pans; MS_H represents the high-frequency detail component of MS.
Step 4: Inspired by the choose-max-absolute-coefficient fusion rule of multiresolution analysis models, Pans_H is subjected to pixel saliency judgment to obtain a refined single-band detail image Pan_H, where the saliency judgment is defined as:
Pan_H(i, j) = Pans_H^b*(i, j), with b* = argmax_{b ∈ {1, …, N}} abs(Pans_H^b(i, j))
Wherein i, j are the pixel coordinates of image Pan_H; max(abs(·)) denotes selecting the band with the maximum absolute value. Then Pan_H and MS_H are used to estimate the high-frequency luminance component I_H by the AIHS algorithm:
I_H = Σ_{b=1}^{N} α_b MS_H^b
Step 5: The extra details that Pans_H owns relative to I_H are extracted as:
D_extra^i = Pans_H^i − I_H
where i denotes the dimension, ranging from 1 to N.
Step 6: Since the guided filter (GF) has excellent detail-transfer characteristics, a guided filter and a difference operation are used to extract the detail residual Res, which is combined with D_extra^i to obtain the final detail map D_final:
MS_H' = GF(Pans_H, MS_H, m, γ)
Res = MS_H − MS_H'
D_final^i = Res^i + D_extra^i
Wherein MS_H' represents the intermediate result obtained by GF filtering; m is the filter window; γ is a penalty factor.
Step 7: The invention expresses the high-frequency component of the luminance component of the result image, Î_H, as:
Î_H = Σ_{i=1}^{N} α_i (MS_H^i + D_i)
Wherein F_i is the fusion result of the i-th band and D_i is the detail-enhanced detail map. Since the influence of the low-frequency information has been eliminated in step 3, D_final has the same spectral characteristics as the MS image, so the adaptive factors obtained when estimating the high-frequency luminance component I_H by AIHS in step 4 need not be updated. D_i can be estimated by the steepest descent method:
D_i^(n+1) = D_i^(n) − η ∇F(D_i^(n))
In the above formula, η is the step length, and initially D_i^(0) takes the value D_final.
Step 8: After the detail-enhanced detail map D_i is obtained, the fusion result can be written as:
F_i = MS_i + D_i
Wherein F_i represents the high-spatial-resolution multispectral image obtained by fusion; MS_i and D_i represent the original multispectral image and the final detail map, respectively.
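The injection in Step 8 is a band-wise addition; a minimal sketch assuming the MS image and the detail maps share shape (H, W, N), with an invented function name:

```python
import numpy as np

def fuse(ms, details):
    """Step 8: inject the band-specific detail maps, F_i = MS_i + D_i."""
    assert ms.shape == details.shape, "one detail map per MS band"
    return ms + details
```

Because each band receives its own D_i rather than one shared single-band detail, the per-band differences the Background section discusses are preserved.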
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (8)
1. The remote sensing image fusion method based on the detail injection model is characterized by comprising the following steps of:
s1, if the band number of the multispectral image is N, constructing a full-color image into an N-dimensional image, and recording the N-dimensional image as a first full-color image, wherein the full-color image is the same as the original full-color image in any dimension;
S2, carrying out band-by-band histogram matching between the N-dimensional panchromatic image and the multispectral image to obtain a histogram-matched N-dimensional panchromatic image, and marking it as a second panchromatic image;
S3, performing iterative filtering on the second full-color image and the multispectral image by using rolling guide filtering to obtain respective low-pass images, and obtaining respective first high-frequency detail components by differential operation;
S4, acquiring a single-band detail image of the full-color image from the second full-color image according to pixel significance judgment, and marking the single-band detail image as a third full-color image;
s5, obtaining a high-frequency brightness component from the high-frequency detail components of the third full-color image and the multispectral image through an AIHS algorithm;
S6, extracting detail information except high-frequency brightness components from the third full-color image, and recording the detail information as a first detail image;
s7, filtering the high-frequency detail components of the second full-color image and the high-frequency detail components of the multispectral image by using a guide filter to obtain respective second high-frequency detail components;
S8, taking the sum of the residual errors of the first high-frequency detail component and the second high-frequency detail component of the multispectral image and the first detail image as a second detail image;
s9, estimating a detail map after detail enhancement according to a steepest descent method by taking the second detail map as an initial value to obtain a third detail map;
S10, fusing the multispectral image with the third detail image, and obtaining a high-frequency component of the brightness component of the remote sensing image according to the fusion result to finish the fusion.
2. The remote sensing image fusion method based on a detail injection model according to claim 1, wherein the process of acquiring the first high frequency detail components of the second full-color image and the multispectral image comprises the following steps:
Pans_L = RGF(Pans, σ_s, σ_r, t)
MS_L = RGF(MS, σ_s, σ_r, t)
Pans_H = Pans − Pans_L
MS_H = MS − MS_L
Wherein Pans_L is the low-pass image of the second panchromatic image; RGF(·) denotes the rolling guidance filtering operation; Pans is the second panchromatic image; σ_s is the spatial standard deviation, whose value controls the filtering scale; σ_r is the weight parameter of the Gaussian function in the gray-scale domain; t denotes the number of iterations; MS_L is the low-pass image of the multispectral image; MS is the multispectral image; Pans_H is the first high-frequency detail component of the second panchromatic image; MS_H is the first high-frequency detail component of the multispectral image.
3. The remote sensing image fusion method based on a detail injection model according to claim 1, wherein the obtaining process of the third full-color image comprises:
Pan_H(r, k) = Pans_H^i*(r, k), with i* = argmax_{i ∈ {1, …, N}} abs(Pans_H^i(r, k))
wherein Pan_H(r, k) represents the pixel in the r-th row and k-th column of the third panchromatic image; Pans_H^i(r, k) represents the pixel in the r-th row and k-th column of the i-th dimension image of the first high-frequency detail component of the second panchromatic image, i = {1, 2, …, N}; abs(·) represents the absolute-value operation.
4. The remote sensing image fusion method based on a detail injection model according to claim 1, wherein the first detail map is represented as:
D_extra^i = Pans_H^i − I_H
wherein D_extra^i is the first detail map corresponding to the i-th dimension panchromatic image; Pans_H^i is the first high-frequency detail component of the i-th dimension image of the second panchromatic image; I_H is the high-frequency luminance component.
5. The method of claim 1, wherein the step of obtaining the high-frequency luminance component I H from the high-frequency detail components of the third full-color image and the multispectral image by using an AIHS algorithm comprises:
I_H = Σ_{i=1}^{N} α_i MS_H^i
constraint conditions: α_1 ≥ 0, …, α_N ≥ 0
wherein α_i represents the adaptive factor; MS_H^i represents the first high-frequency detail component of the i-th band of the multispectral image.
6. The remote sensing image fusion method based on a detail injection model as claimed in claim 1, wherein the process of obtaining the second detail map comprises the following steps:
MS_H' = GF(Pans_H, MS_H, m, γ)
Res = MS_H − MS_H'
D_final^i = Res^i + D_extra^i
wherein MS_H' is the second high-frequency detail component obtained from the guided filter; GF(·) is the guided filter; Pans_H is the first high-frequency detail component of the second panchromatic image; MS_H is the first high-frequency detail component of the multispectral image; m is the filter window; γ is a penalty factor; Res is the residual between the first and second high-frequency detail components of the multispectral image; D_extra^i is the first detail map corresponding to the i-th dimension panchromatic image; D_final is the second detail map.
7. The remote sensing image fusion method based on a detail injection model according to claim 1, wherein the process of estimating the detail map after detail enhancement according to the steepest descent method comprises:
D_i^(n+1) = D_i^(n) − η ∇F(D_i^(n)), with F(D_i) = ||Pan_H − (MS_H^i + D_i)||²
wherein D_i^(n+1) is the detail map after the (n+1)-th iteration; D_i^(n) represents the i-th dimension third detail map obtained from the n-th iteration; initially D_i^(0) takes the value of the second detail map; F is the minimized difference function; D_i is the i-th dimension third detail map; solving the minimized difference function F yields F(D_i), the result image with the smallest difference from Pan_H; Pan_H is the third panchromatic image; MS_H is the first high-frequency detail component of the multispectral image; ||·|| denotes the norm.
8. The remote sensing image fusion method based on the detail injection model according to claim 1, wherein the high frequency component of the luminance component of the remote sensing image is represented as:
Î_H = Σ_{i=1}^{N} α_i (MS_H^i + D_i)
wherein Î_H is the high-frequency component of the luminance component of the remote sensing image; α_i is the adaptive factor; F is the ideal fusion result image; MS_H is the first high-frequency detail component of the multispectral image; D_i is the detail image to be injected; N is the band number of the multispectral image.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210126647.3A | 2022-02-10 | 2022-02-10 | Remote sensing image fusion method based on detail injection model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114445306A CN114445306A (en) | 2022-05-06 |
CN114445306B true CN114445306B (en) | 2024-09-24 |
Family
ID=81370817
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210126647.3A Active CN114445306B (en) | 2022-02-10 | 2022-02-10 | Remote sensing image fusion method based on detail injection model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114445306B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109146814B (en) * | 2018-08-20 | 2021-02-23 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image processing method, image processing device, storage medium and electronic equipment |
CN109993717A (en) * | 2018-11-14 | 2019-07-09 | Chongqing University of Posts and Telecommunications | Remote sensing image fusion method combining guided filtering and IHS transformation |
AU2020100179A4 (en) * | 2020-02-04 | 2020-03-19 | Huang, Shuying DR | Optimization Details-Based Injection Model for Remote Sensing Image Fusion |
CN111539900B (en) * | 2020-04-24 | 2023-03-24 | Henan University | IHS remote sensing image fusion method based on guided filtering |
Non-Patent Citations (1)
Title |
---|
Research on Remote Sensing Image Fusion Algorithms Based on Multi-scale Decomposition and Detail Extraction; Wang Ou; Wanfang Data; 2023-07-06; full text *
Also Published As
Publication number | Publication date |
---|---|
CN114445306A (en) | 2022-05-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110533620B (en) | Hyperspectral and full-color image fusion method based on AAE extraction spatial features | |
Liu et al. | A 3-D atrous convolution neural network for hyperspectral image denoising | |
CN110415199B (en) | Multispectral remote sensing image fusion method and device based on residual learning | |
CN114119444B (en) | Multi-source remote sensing image fusion method based on deep neural network | |
Li et al. | Hyperspectral pansharpening via improved PCA approach and optimal weighted fusion strategy | |
CN110717354A (en) | Superpixel classification method based on semi-supervised K-SVD and multi-scale sparse representation | |
CN113763299B (en) | Panchromatic and multispectral image fusion method and device and application thereof | |
Scheunders et al. | Wavelet denoising of multicomponent images using Gaussian scale mixture models and a noise-free image as priors | |
Xiong et al. | Pan-sharpening based on convolutional neural network by using the loss function with no-reference | |
CN111696043A (en) | Hyperspectral image super-resolution reconstruction algorithm of three-dimensional FSRCNN | |
Long et al. | Dual self-attention Swin transformer for hyperspectral image super-resolution | |
Gu et al. | Hyperspectral intrinsic image decomposition with enhanced spatial information | |
Gao et al. | Improving the performance of infrared and visible image fusion based on latent low-rank representation nested with rolling guided image filtering | |
CN111008664A (en) | Hyperspectral sea ice detection method based on space-spectrum combined characteristics | |
Benzenati et al. | Generalized Laplacian pyramid pan-sharpening gain injection prediction based on CNN | |
Duran et al. | Restoration of pansharpened images by conditional filtering in the PCA domain | |
Qu et al. | Hyperspectral and panchromatic image fusion via adaptive tensor and multi-scale retinex algorithm | |
Zhang et al. | SC-PNN: Saliency cascade convolutional neural network for pansharpening | |
CN112200123A (en) | Hyperspectral open set classification method combining dense connection network and sample distribution | |
CN106157269A (en) | Panchromatic image sharpening method based on directional multi-scale group low-rank decomposition | |
CN112598708B (en) | Hyperspectral target tracking method based on four-feature fusion and weight coefficient | |
CN109064402B (en) | Single image super-resolution reconstruction method based on enhanced non-local total variation model prior | |
Yuan et al. | ROBUST PCANet for hyperspectral image change detection | |
CN115100075A (en) | Hyperspectral panchromatic sharpening method based on spectral constraint and residual error attention network | |
CN114445306B (en) | Remote sensing image fusion method based on detail injection model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2024-08-29; Address after: No. 35 Yueyang Avenue East, Yueyang Economic and Technological Development Zone, Yueyang City, Hunan Province, 414000; Applicant after: Yueyang Surveying and Mapping Institute Co.,Ltd.; Country or region after: China; Address before: 400065 Chongwen Road, Nanshan Street, Nanan District, Chongqing; Applicant before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS; Country or region before: China |
GR01 | Patent grant | ||