CN101216557A - Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method - Google Patents

Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method

Info

Publication number
CN101216557A
Authority
CN
China
Prior art keywords
image
residual
full
multispectral
resolution
Prior art date
Legal status
Granted
Application number
CNA2007101732929A
Other languages
Chinese (zh)
Other versions
CN101216557B (en)
Inventor
Yang Huijuan
Zhang Jianqiu
Hu Bo
Current Assignee
Fudan University
Original Assignee
Fudan University
Priority date
Filing date
Publication date
Application filed by Fudan University
Priority to CN2007101732929A
Publication of CN101216557A
Application granted
Publication of CN101216557B
Legal status: Expired - Fee Related

Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the field of image fusion and relates to a method for fusing multispectral and panchromatic images based on residual hypercomplex (quaternion) symplectic decomposition. The method first builds hypercomplex models of the residual images of the multispectral image and of the panchromatic image, and then applies a hypercomplex symplectic decomposition along the gray-scale axis to the hypercomplex residual model of the multispectral image, yielding a simplex part carrying the luminance information and a perplex part carrying the chrominance information. Analysis shows that replacing the simplex part obtained from the low-resolution multispectral image with the hypercomplex residual image of the high-resolution panchromatic image recovers the residual of a high-resolution multispectral image, and recombining the symplectic parts then realizes the fusion of the multispectral and panchromatic images. Simulation results show that the method introduces no visible spectral distortion, and evaluation against existing image fusion methods shows that it outperforms the IHS, PCA and wavelet-transform fusion methods.

Description

Multispectral and panchromatic image fusion method based on residual hypercomplex dual (symplectic) decomposition
Technical field
The invention belongs to the technical field of image fusion, and specifically relates to a multispectral and panchromatic image fusion method based on residual hypercomplex symplectic (dual) decomposition.
Background technology
At present, Earth-observation satellites provide an increasing number of multi-space, multi-resolution, multi-temporal and multispectral images covering the same area, supplying rich data for topographic mapping and map updating, land-use classification, crop and forest classification, ice/snow and flood monitoring, and so on. To make full use of these data, the multispectral and panchromatic images need to be fused.
The fusion method based on the intensity-hue-saturation (IHS) transform [1] has become a standard procedure in Earth-observation image analysis; it is used for colour enhancement of highly correlated image data and for fusion operations that improve spatial resolution. The standard IHS transform is suitable when the panchromatic image and the luminance component obtained from the multispectral image are highly correlated. However, when the spectral range of the panchromatic image does not cover all bands of the multispectral image, and/or the panchromatic and multispectral images are not acquired at the same time, the luminance component obtained from the multispectral image by the IHS transform differs greatly from the panchromatic image; fusing them with the IHS transform then produces severe spectral distortion [1]. Likewise, the principal component analysis (PCA) fusion method simply replaces the first principal component of the low-resolution multispectral image with the high-resolution panchromatic image, which loses some spectral characteristics of the low-resolution multispectral image and therefore causes severe spectral distortion in the fused result [2]. The high-pass-filtering fusion method preserves the information of the multispectral image, but filtering the high-resolution image removes much texture information [3]. The wavelet-transform fusion method preserves the spectral information of the multispectral image relatively well, but its fusion quality depends on the number of wavelet decomposition levels; moreover, fusing the wavelet coefficients destroys the orthogonality of the forward and inverse transforms and causes spectral leakage, so the fused result exhibits blocking artifacts [4].
It can be seen that all of the above methods replace some component through some transform and do not fully take the complete spectral information of the panchromatic image into account. Reference [5] points out that a real vector signal is determined jointly by all of its components; if any component is ignored or altered, the original vector signal cannot be reconstructed.
A multispectral image is precisely such a vector signal: each of its pixels is represented by a red (R), green (G) and blue (B) three-primary-colour vector. Transform methods such as the IHS and PCA methods mentioned above place the R, G and B components of each vector pixel separately, as individual elements, into the matrix they operate on. When this matrix then undergoes the matrix transform, decomposition, replacement and inverse transform required by fusion methods such as IHS and PCA (for example the principal-component decomposition and replacement), the fixed positional relation of the RGB primaries is broken: the components of one vector pixel become mixed with the components of other vector pixels, producing combinations of R, G, B values that differ from the positional relation of the original multispectral colour image. For example, if r1, g1, b1 and r2, g2, b2 are two vector pixels of the original multispectral colour image represented by the RGB primaries, such a transform, decomposition, replacement and inverse transform may turn r1, g1, b1 into r1, g2, b1 and r2, g2, b2 into r2, g1, b2. This means that such transforms destroy the specific relation of the original image on the RGB primary-colour vector-space positions, so the fused result image is distorted. This is the hitherto unexplained reason why the IHS and PCA fusion methods produce colour distortion [6].
Summary of the invention
To overcome this shortcoming of existing multispectral image fusion methods, the present invention describes and processes the vector pixels of the multispectral image as a whole, using hypercomplex vectors, so that the specific relation of the pixels on the RGB primary-colour vector-space positions is preserved through the subsequent transforms and processing, thereby avoiding distortion of the processed image.
The present invention first obtains the residual images of the multispectral image and of the panchromatic image. The residual image of the RGB multispectral image is described directly and as a whole by hypercomplex vector pixels, which preserves the specific relation of the RGB vector pixels on the vector-space positions. For the residual image of the panchromatic image, whose pixels are scalars, each scalar pixel is expanded into a three-dimensional vector pixel [7] so that its representation is consistent with the colour residual image in the subsequent processing and fusion; to keep the energy of the vector pixel consistent with the scalar pixel, each scalar pixel value is multiplied by 1/√3. In this way the vectorized residual image of the panchromatic image is also described and processed as a whole by hypercomplex vectors. Secondly, the hypercomplex residual image of the multispectral image is subjected to a hypercomplex symplectic decomposition along the direction of the gray-scale axis, which yields a simplex part containing the luminance information and a perplex part containing the chrominance information. Since the panchromatic image contains richer luminance detail than the corresponding multispectral image, replacing the simplex part of the multispectral residual image with the residual luminance information of the panchromatic image recovers luminance detail that the high-resolution multispectral image lacks but that is present in the panchromatic image. Analysis and simulation show that the multispectral and panchromatic image fusion method based on this residual hypercomplex decomposition yields a high-resolution multispectral fused image; because the vector pixels are described and processed with hypercomplex numbers, the detail and spectral information of the panchromatic image is retained without producing colour distortion.
The specific steps of the method are as follows (a code sketch of the whole pipeline is given after the list):
(1) For the multispectral image MS, enlarge the low-resolution multispectral image to the same size as the panchromatic image by an interpolation algorithm Z, obtaining the interpolated image I;
(2) Low-pass filter and down-sample the interpolated image I to obtain an estimate MS_L of the original low-resolution multispectral image, and subtract MS_L from MS to obtain the residual image e_g of the multispectral image; pass the panchromatic image MP through the same low-pass filter to obtain an estimate MP_L of the original panchromatic image, and subtract it from the original panchromatic image to obtain the residual image e_p of the panchromatic image;
(3) Enlarge the residual image e_g to the same size as the panchromatic residual image e_p, obtaining e_f;
(4) Model e_f and the vectorized e_p as hypercomplex images, obtaining Qe_f(x,y) and Qe_p(x,y) respectively;
(5) Perform the hypercomplex symplectic decomposition of Qe_f(x,y) along the direction of the gray-scale axis, obtaining the simplex part f1(x,y) containing the luminance information and the perplex part f2(x,y) containing the chrominance information;
(6) Replace f1(x,y) with Qe_p(x,y) to obtain the recovered residual: e_f'(x,y) = Qe_p(x,y) + f2(x,y)μ2;
(7) Finally obtain the fusion result: MS' = I + e_f'.
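For concreteness, the following NumPy/SciPy sketch walks through steps (1)-(7). It is only an illustration of the procedure: the cubic interpolation, the Gaussian low-pass filter, the resolution ratio and all function names are assumptions made here, not details specified by the patent; the simplification used in step (5) is justified after eq. (10) below.

```python
# Minimal sketch of steps (1)-(7); interpolation, filter and parameter choices are assumptions.
import numpy as np
from scipy import ndimage

def zoom_rgb(img, factor, order=3):
    """Interpolate each band of an (H, W, 3) image by the given factor (steps (1)/(3))."""
    return np.stack([ndimage.zoom(img[..., c], factor, order=order) for c in range(3)], axis=-1)

def fuse(ms, pan, ratio=2, sigma=1.0):
    """ms: (h, w, 3) low-resolution multispectral image; pan: (H, W) panchromatic image,
    with H = ratio*h and W = ratio*w. Returns the fused (H, W, 3) image MS'."""
    # (1) interpolate MS up to the panchromatic size -> I
    I = zoom_rgb(ms, ratio)
    # (2) multispectral residual e_g: low-pass filter and down-sample I, subtract from MS
    I_lp = np.stack([ndimage.gaussian_filter(I[..., c], sigma) for c in range(3)], axis=-1)
    e_g = ms - zoom_rgb(I_lp, 1.0 / ratio)
    #     panchromatic residual e_p: subtract the low-pass filtered panchromatic image
    e_p = pan - ndimage.gaussian_filter(pan, sigma)
    # (3) enlarge e_g to the panchromatic size -> estimate of e_f
    e_f = zoom_rgb(e_g, ratio)
    # (4) hypercomplex models: the three channels play the roles of i, j, k (real part is 0)
    Qe_f = e_f
    Qe_p = np.repeat(e_p[..., None] / np.sqrt(3), 3, axis=-1)        # eq. (5)
    # (5) symplectic decomposition of Qe_f along the gray axis mu1 = (i+j+k)/sqrt(3):
    #     the simplex (luminance) part is the projection of each vector pixel onto mu1
    f1 = Qe_f.mean(axis=-1, keepdims=True) * np.ones_like(Qe_f)
    f2mu2 = Qe_f - f1                                                 # perplex (chrominance) part
    # (6) replace the simplex part by Qe_p to recover the high-resolution residual
    e_f_rec = Qe_p + f2mu2
    # (7) fused result
    return I + e_f_rec
```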
Generally speaking, the multispectral image and the panchromatic image have different resolutions. To fuse their information, the low-resolution multispectral image must first be transformed into an image with the same resolution as the panchromatic image, which is normally done by interpolation. For the multispectral image MS, let Z denote the interpolation algorithm that enlarges the low-resolution multispectral image to the same size as the panchromatic image; the interpolated estimate of the high-resolution image I is then expressed as:
I=Z(MS) (1)
I is thus the estimate of the high-resolution multispectral image MS' obtained with the interpolation algorithm Z. The residual of I with respect to the true high-resolution image MS' is:
e_f = MS' - I = MS' - Z(MS)    (2)
In the above formula e_f is called the high-resolution residual image; it represents the high-frequency detail of the multispectral image [10].
The analysis in [8] shows that applying the low-pass filtering and down-sampling operator H to e_f is equivalent to low-pass filtering and down-sampling I, which yields an estimate MS_L at a resolution lower than the original multispectral image; the difference between MS and MS_L gives the residual image e_g of the multispectral image:
e_g = H e_f = H[MS' - Z(MS)] = H MS' - H I ≈ MS - MS_L    (3)
Since e_g and e_f are related by a definite linear mapping, e_g can be used to estimate e_f. This conclusion of [8] also means that, for the panchromatic image, if it is passed through the same low-pass filter and the filtered image is subtracted from the original panchromatic image, the resulting residual image e_p of the panchromatic image contains high-frequency detail that the multispectral image MS does not contain, because the resolution of MS is lower than that of the panchromatic image. This analysis also shows that extracting the residual of the multispectral image separates the spatial detail of the image from its spectral content: the residual carries mainly the image detail, while the remaining multispectral image carries mainly the spectral information. Likewise, the residual image of the panchromatic image contains its high-frequency detail, which is exactly the detail missing from the multispectral residual. Therefore, if the residual of the multispectral image can be strengthened in some way by the residual image of the panchromatic image, the spatial information of the multispectral image is enhanced while its spectral information is preserved.
The residual image of a multispectral image represented by its R, G, B colour components is modelled with a hypercomplex number as [9]:
f(x,y)=r(x,y)i+g(x,y)j+b(x,y)k, (4)
where f(x,y) is the hypercomplex model of the pixel represented by the three colour components R, G, B, and r(x,y), g(x,y), b(x,y) are the R, G, B components of the multispectral image. This representation expresses the three RGB colour components of the multispectral image as one integral vector, so that whatever transform is applied, the vector is kept whole and the relative position of the R, G, B components within a vector pixel cannot change.
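As a concrete illustration of eq. (4) (an expository sketch, not part of the patent text), the residual image can be held as an array of pure quaternions whose real part is zero; because the real part is always zero, the other sketches in this description simply carry the three imaginary components as the last axis of an ordinary array.

```python
import numpy as np

def quaternion_model(rgb):
    """Encode an RGB residual image as pure quaternions (real, i, j, k), eq. (4).
    rgb: (H, W, 3) array; returns an (H, W, 4) array whose real part is zero."""
    q = np.zeros(rgb.shape[:2] + (4,), dtype=float)
    q[..., 1] = rgb[..., 0]   # i coefficient <- R
    q[..., 2] = rgb[..., 1]   # j coefficient <- G
    q[..., 3] = rgb[..., 2]   # k coefficient <- B
    return q
```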
To better merge the spatial information of the high-resolution panchromatic image with the spectral information of the low-resolution multispectral image, we model the residuals of the panchromatic image and of the multispectral image separately as hypercomplex numbers. For the residual image of the panchromatic image, whose pixels are scalars, each scalar pixel must be expanded into a three-dimensional vector pixel; to keep the magnitude of the vector pixel consistent with the scalar pixel, each scalar pixel value is multiplied by 1/√3. The hypercomplex models of the residual images of the RGB multispectral image and of the vectorized panchromatic image are then:
Qe_p(x,y) = (1/√3) e_p(x,y) i + (1/√3) e_p(x,y) j + (1/√3) e_p(x,y) k    (5)
Qe_f(x,y) = r(x,y)i + g(x,y)j + b(x,y)k
where e_p(x,y) is the pixel value of the residual image of the panchromatic image, and r(x,y), g(x,y), b(x,y) are the R, G, B components of e_f(x,y).
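Under the same array convention, eq. (5) amounts to a single line; the helper name is an assumption made here for illustration:

```python
import numpy as np

def vectorize_pan_residual(e_p):
    """Expand the scalar panchromatic residual e_p (H, W) into a vector pixel whose
    i, j, k components all equal e_p/sqrt(3), so that |Qe_p| equals |e_p| (eq. (5))."""
    return np.repeat(e_p[..., None] / np.sqrt(3), 3, axis=-1)
```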
The Cayley-Dickson decomposition expresses a quaternion as a generalized complex number, i.e. a complex number whose real and imaginary parts are themselves complex. For q = a + bi + cj + dk, the Cayley-Dickson form can be written as [10]:
q = A + Bj    (6)
where A = a + bi and B = c + di.
The Cayley-Dickson decomposition is the mathematical foundation of the symplectic decomposition. Using the generalized complex operator μ: if two arbitrary orthogonal unit pure quaternions μ1 and μ2 are chosen (μ1 ⊥ μ2), any quaternion can be expressed in generalized complex form, which we call the symplectic form of the quaternion:
q = A' + B'μ2
where A' = a' + b'μ1 and B' = c' + d'μ1, that is:
q = (a' + b'μ1) + (c' + d'μ1)μ2    (7)
A' is called the simplex part and B' the perplex part. Since the symplectic form of a quaternion lives in a generalized complex space, both parts behave as complex numbers. Expanding the expression above gives:
q = a' + b'μ1 + c'μ2 + d'μ3    (8)
where μ3 = μ1μ2, with μ3 ⊥ μ1 and μ3 ⊥ μ2. In this way we obtain another set of operators μ1, μ2, μ3 equivalent to the standard quaternion operators i, j, k. The coefficients (a', b', c', d') are obtained from:
a' = S[q]
b' = -(1/2)(V[q]μ1 + μ1V[q])
c' = -(1/2)(V[q]μ2 + μ2V[q])    (9)
d' = -(1/2)(V[q]μ1μ2 + μ1μ2V[q])
where S[q] is the real (scalar) part of the quaternion q, i.e. S[q] = a, and V[q] is its pure imaginary (vector) part, i.e. V[q] = bi + cj + dk.
For the hypercomplex model of a colour image, f(x,y) = r(x,y)i + g(x,y)j + b(x,y)k, this yields:
f(x,y) = f1(x,y) + f2(x,y)μ2    (10)
where f1(x,y) and f2(x,y) are the simplex and perplex parts respectively. If μ1 is chosen as the gray-scale axis, then f1(x,y) carries the luminance information (still an RGB three-colour image, not a simple gray image) and f2(x,y) carries the chrominance information (likewise an RGB colour image).
According to this characteristic of the vector pixels of the residual image, Qe_f(x,y) is decomposed by the hypercomplex symplectic decomposition along the direction of the gray-scale axis, and its simplex part is replaced by the hypercomplex residual model of the panchromatic image. This recovers the residual image of the low-resolution multispectral image and thereby realizes the fusion of the multispectral and panchromatic images.
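The sketch below is one reading of steps (5)-(6), not the patent's reference implementation. It uses the fact that for a pure quaternion (a = 0) with μ1 chosen as the gray axis (i + j + k)/√3, eq. (9) gives b' = V[q]·μ1, so the simplex part is simply the projection of the vector pixel onto the gray axis and the perplex part is the remainder; with this choice, replacing the simplex part injects the panchromatic detail as the luminance of the residual while leaving its chrominance untouched.

```python
import numpy as np

def symplectic_decompose(q):
    """Split a pure-quaternion residual image q of shape (H, W, 3) along the gray axis
    mu1 = (i + j + k)/sqrt(3) into its simplex (luminance) and perplex (chrominance) parts."""
    # f1 = (V[q] . mu1) mu1: every channel of the simplex part equals the per-pixel channel mean
    f1 = q.mean(axis=-1, keepdims=True) * np.ones_like(q)
    f2mu2 = q - f1                      # f2(x, y) mu2: the component orthogonal to the gray axis
    return f1, f2mu2

def recover_residual(Qe_f, Qe_p):
    """Step (6): replace the simplex part of the multispectral residual by the vectorized
    panchromatic residual, e_f'(x, y) = Qe_p(x, y) + f2(x, y) mu2."""
    _, f2mu2 = symplectic_decompose(Qe_f)
    return Qe_p + f2mu2
```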
Embodiment
We use a multispectral image and a panchromatic image of the Shanghai area acquired by the Landsat 7 ETM+ sensor on June 14, 2000 (north latitude 314460.0000N, east longitude 1215360.0000E), as well as a SPOT panchromatic image of the Hanoi area acquired on October 26, 1995 together with a TM multispectral image, to describe the performance of the present invention. The panchromatic image has a spatial resolution of 15 m and the multispectral image a spatial resolution of 30 m.
Because Landsat 7 ETM+ and SPOT do not provide a true 15 m multispectral image for comparison, we degrade the panchromatic and multispectral images to 30 m and 60 m respectively so that the true 30 m multispectral image can serve as the reference: the 30 m panchromatic image is fused with the 60 m multispectral image, and the fusion result is compared with the 30 m multispectral image.
The performance of the invention is verified below by simulation. To measure the enhancement of spatial information in the remote-sensing image fusion process, the SDD parameter [11] is used to evaluate the fusion result; the SDD parameter is the standard deviation of the difference between the fused image and the low-resolution multispectral image, defined as follows:
SDD = sqrt{ (1/MN) Σ_x Σ_y [ (F(x,y) - MS_i(x,y)) - (F̄ - MS̄_i) ]² }
where F is the fused image, MS_i is the i-th band of the low-resolution multispectral image, and F̄ and MS̄_i are the mean pixel values of the respective images. In general the SDD parameter of the fused image should be close to that of the high-resolution multispectral image; in that case the spatial information contained in the fused image matches the spatial information contained in the high-resolution multispectral image. If the SDD of the fused image is larger than that of the high-resolution multispectral image, too much panchromatic spatial information may have been injected into the multispectral image, changing its spectral characteristics.
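A direct NumPy transcription of this definition (assuming the fused band and the multispectral band have already been brought to the same size):

```python
import numpy as np

def sdd(F_band, MS_band):
    """Standard deviation of the difference between a fused band F and the
    corresponding multispectral band MS_i, per the definition above."""
    d = F_band - MS_band
    return np.sqrt(np.mean((d - d.mean()) ** 2))   # equivalently d.std()
```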
To measure how well the spectral characteristics are preserved in the remote-sensing image fusion process, the following statistical parameters are adopted:
1) Peak signal-to-noise ratio (PSNR)
If the difference between the fused image F(x,y) and the standard reference image R(x,y) is regarded as noise, and the standard reference image R(x,y) as the signal, the peak signal-to-noise ratio of the fused image is defined as [12]:
PSNR = 10 × log10{ MN [max(F(x,y)) - min(F(x,y))] / Σ_x Σ_y [R(x,y) - F(x,y)]² }
The unit of PSNR is the decibel (dB). In general, the larger the computed PSNR, the closer the spectral characteristics of the fused image are to those of the standard reference image, and the better the fusion.
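A transcription of the PSNR formula as stated above; note that many references square the dynamic-range term in the numerator, so this sketch follows the definition given here rather than a canonical PSNR implementation:

```python
import numpy as np

def psnr(F, R):
    """Peak signal-to-noise ratio of a fused band F against the reference band R,
    per the definition above (MN times the dynamic range of F over the squared error)."""
    mn = F.size                         # M * N
    dyn = F.max() - F.min()
    return 10.0 * np.log10(mn * dyn / np.sum((R - F) ** 2))
```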
2) Correlation coefficient (CC)
The correlation coefficient between the fused image F and the standard reference image R reflects the similarity of the spectral characteristics of the two images; it is defined as follows:
CC = Σ_x Σ_y (F(x,y) - F̄)(R(x,y) - R̄) / sqrt{ [Σ_x Σ_y (F(x,y) - F̄)²] [Σ_x Σ_y (R(x,y) - R̄)²] }
The larger the computed correlation coefficient, the more similar the spectral characteristics of the fused image and the standard reference image, and the better the fusion. PSNR and the correlation coefficient are computed separately on each band of the fused image and the high-resolution multispectral image.
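A per-band transcription of the correlation coefficient:

```python
import numpy as np

def cc(F, R):
    """Correlation coefficient between a fused band F and the reference band R."""
    dF, dR = F - F.mean(), R - R.mean()
    return np.sum(dF * dR) / np.sqrt(np.sum(dF ** 2) * np.sum(dR ** 2))
```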
3) Relative global error (ERGAS)
The relative global error reflects how the spectrum of the fused image changes over the bands; it is defined as follows [15]:
ERGAS = 100 (h/l) sqrt{ (1/K) Σ_{i=1..K} [ (1/MN) Σ_x Σ_y (F_i(x,y) - R_i(x,y))² ] / R̄_i² }
where l is the resolution of the low-resolution multispectral image, h is the resolution of the high-resolution multispectral image, and K is the number of bands participating in the fusion. The smaller the computed relative global error, the closer the fused image is to the standard reference image and the better the fusion.
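A sketch under the reading of the formula above (per-band root-mean-square error normalized by the band mean); the h/l convention follows the text:

```python
import numpy as np

def ergas(F, R, h, l):
    """Relative global error between the fused image F and the reference R, both (H, W, K);
    h and l are the resolutions of the high- and low-resolution images (e.g. 15 and 30 m)."""
    K = F.shape[-1]
    terms = [np.mean((F[..., i] - R[..., i]) ** 2) / np.mean(R[..., i]) ** 2
             for i in range(K)]
    return 100.0 * (h / l) * np.sqrt(np.mean(terms))
```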
In practical evaluation of remote-sensing image fusion, the SDD parameter, which reflects the enhancement of spatial information, is considered together with the peak signal-to-noise ratio PSNR, the correlation coefficient CC and the relative global error ERGAS, which reflect the preservation of spectral information. An optimal remote-sensing image fusion method should not only improve the spatial resolution of the fused image but also preserve the spectral characteristics of the original image as far as possible, so a balance must be struck between these two classes of parameters.
Table 1 gives the SDD parameters of the various fused images, with the SDD of the 30 m multispectral image serving as the reference. The table shows that the SDD parameters of the IHS and PCA methods are far larger than those of the true image, indicating that the panchromatic information injected into the multispectral image exceeds the spatial information that a high-resolution multispectral image should contain. For the wavelet-transform method, the SDD parameters of the different bands are very close to one another, showing that a similar amount of panchromatic spatial information is injected into every band, yet they still differ from the SDD of the true image. The SDD parameters of the proposed method are the closest to those of the true image, and their distribution over the bands also resembles that of the true image; this shows that the proposed method treats each band of the multispectral image individually when injecting the spatial information of the panchromatic image, which is consistent with the real situation.
Table 2 gives the statistical parameters of the various fusion methods for the preservation of spectral characteristics. The residual hypercomplex dual-decomposition fusion method has a larger PSNR and correlation coefficient CC on every band, showing that the spectral characteristics of its fused image are very close to those of the 30 m multispectral image; at the same time its relative global error ERGAS over all bands is the smallest. This demonstrates the effectiveness of the proposed method in preserving spectral characteristics.
Table 1. Statistical parameters for the enhancement of spatial information in the fusion results

Parameter  Band  True image  IHS    PCA    Wavelet transform  Residual symplectic
SDD        R     0.061       0.245  0.195  0.076              0.071
SDD        G     0.060       0.253  0.268  0.075              0.069
SDD        B     0.051       0.253  0.325  0.075              0.056
Table 2. Statistical parameters for the preservation of spectral characteristics in the fusion results

Parameter  Band  IHS    PCA    Wavelet transform  Residual symplectic
PSNR       R     12.17  14.13  22.62              23.2896
PSNR       G     11.93  11.46  22.64              24.2447
PSNR       B     11.95  9.83   23.02              25.2401
CC         R     0.45   0.61   0.92               0.96
CC         G     0.19   0.19   0.93               0.97
CC         B     0.26   0.11   0.95               0.99
ERGAS      all   22.22  23.15  6.47               2.77
Tables 3 and 4 give the corresponding analysis results for the fusion of the SPOT satellite image with the TM multispectral image. The results again show that the proposed method outperforms the IHS, PCA and wavelet methods.
Table 3. Statistical parameters for the enhancement of spatial information in the fusion results

Parameter  Band  True image  IHS    PCA    Wavelet transform  Residual symplectic
SDD        R     0.063       0.092  0.179  0.066              0.065
SDD        G     0.062       0.117  0.281  0.065              0.063
SDD        B     0.057       0.117  0.279  0.064              0.056
Table 4. Statistical parameters for the preservation of spectral characteristics in the fusion results

Parameter  Band  IHS    PCA    Wavelet transform  Residual symplectic
PSNR       R     20.87  13.49  24.69              27.28
PSNR       G     18.75  11.37  24.43              26.85
PSNR       B     18.72  11.42  24.66              26.24
CC         R     0.92   0.25   0.95               0.97
CC         G     0.88   0.33   0.96               0.97
CC         B     0.88   0.27   0.95               0.98
ERGAS      all   11.36  26.6   6.36               4.04
List of references
[1] T. M. Tu, S. C. Su, H. C. Shyu, and P. S. Huang. A new look at IHS-like image fusion methods [J]. Information Fusion, 2001, 2(3): 177-186.
[2] Yesou H, Besnus Y, Polet Y. Extraction of spectral information from Landsat TM data and merger with SPOT panchromatic imagery - a contribution to the study of geological structures [J]. ISPRS Journal of Photogrammetry and Remote Sensing, 1993, 48(5): 23-26.
[3] Shettigara V. K. A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set [J]. Photogrammetric Engineering and Remote Sensing, 1992, 58(5): 561-567.
[4] Nunez J, Otazu X, Fors O, et al. Multiresolution-based image fusion with additive wavelet decomposition [J]. IEEE Transactions on Geoscience and Remote Sensing, 1999, 37(3): 1204-1211.
[5] C. E. Moxey, S. J. Sangwine and T. A. Ell. Hypercomplex correlation techniques for vector images [J]. IEEE Trans. Signal Processing, 2003, 51(7): 1941-1953.
[6] Yang Huijuan, Zhang Jianqiu and Hu Bo. "Multispectral and panchromatic image fusion method of supercomplex principal element weighting", application number: 200610118103.3.
[7] C. E. Moxey, S. J. Sangwine and T. A. Ell. Color-grayscale image registration using hypercomplex phase correlation [C]. IEEE ICIP, 2002, 385-388.
[8] Fengzhi Pan, Liming Zhang. New image super-resolution scheme based on residual error restoration by neural networks [J]. Optical Engineering, 2003, 42(10): 3038-3046.
[9] C. E. Moxey, S. J. Sangwine, T. A. Ell. Vector correlation of color images [C]. In 1st European Conf. on Color in Graphics, Imaging and Vision, Poitiers, France, 2002, 343-347.
[10] Todd A. Ell, S. J. Sangwine. Hypercomplex Fourier transforms of color images [J]. IEEE Trans. Image Processing, 2007, 16(1): 22-35.
[11] M. Gonzalez-Audicana, J. L. Saleta, R. G. Catalan, et al. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition [J]. IEEE Trans. Geosci. Remote Sens., 2004, 42(6): 1291-1299.
[12] Wang Haihui, Peng Jiaxiong, et al. Research on the evaluation of fusion effects of multi-source remote sensing images [J]. Computer Engineering and Applications, 2003, 25: 33-37.

Claims (1)

1. A multispectral and panchromatic image fusion method based on residual hypercomplex dual decomposition, characterized in that the specific steps are as follows:
(1) For the multispectral image MS, enlarge the low-resolution multispectral image to the same size as the panchromatic image by an interpolation algorithm Z, obtaining the interpolated image I;
(2) Low-pass filter and down-sample the interpolated image I to obtain an estimate MS_L of the original low-resolution multispectral image, and subtract MS_L from MS to obtain the residual image e_g of the multispectral image; pass the panchromatic image MP through the same low-pass filter to obtain an estimate MP_L of the original panchromatic image, and subtract it from the original panchromatic image to obtain the residual image e_p of the panchromatic image;
(3) Enlarge the residual image e_g to the same size as the panchromatic residual image e_p, obtaining e_f;
(4) Model e_f and the vectorized e_p as hypercomplex images, obtaining Qe_f(x,y) and Qe_p(x,y) respectively;
(5) Perform the hypercomplex symplectic decomposition of Qe_f(x,y) along the direction of the gray-scale axis, obtaining the simplex part f1(x,y) containing the luminance information and the perplex part f2(x,y) containing the chrominance information;
(6) Replace f1(x,y) with Qe_p(x,y) to obtain the recovered residual: e_f'(x,y) = Qe_p(x,y) + f2(x,y)μ2;
(7) Finally obtain the fusion result: MS' = I + e_f'.
CN2007101732929A 2007-12-27 2007-12-27 Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method Expired - Fee Related CN101216557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2007101732929A CN101216557B (en) 2007-12-27 2007-12-27 Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method


Publications (2)

Publication Number Publication Date
CN101216557A true CN101216557A (en) 2008-07-09
CN101216557B CN101216557B (en) 2011-07-20

Family

ID=39623029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007101732929A Expired - Fee Related CN101216557B (en) 2007-12-27 2007-12-27 Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method

Country Status (1)

Country Link
CN (1) CN101216557B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7046841B1 (en) * 2003-08-29 2006-05-16 Aerotec, Llc Method and system for direct classification from three dimensional digital imaging
CN100465661C (en) * 2006-11-09 2009-03-04 复旦大学 Multispectral and panchromatic image fusion method of supercomplex principal element weighting

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930604A (en) * 2010-09-08 2010-12-29 中国科学院自动化研究所 Infusion method of full-color image and multi-spectral image based on low-frequency correlation analysis
CN101930604B (en) * 2010-09-08 2012-03-28 中国科学院自动化研究所 Infusion method of full-color image and multi-spectral image based on low-frequency correlation analysis
CN102622730A (en) * 2012-03-09 2012-08-01 武汉理工大学 Remote sensing image fusion processing method based on non-subsampled Laplacian pyramid and bi-dimensional empirical mode decomposition (BEMD)
CN105869114A (en) * 2016-03-25 2016-08-17 哈尔滨工业大学 Multispectral image and full-color image fusion method based on multilayer inter-band structural model
CN105869114B (en) * 2016-03-25 2018-12-11 哈尔滨工业大学 Multispectral image and panchromatic image fusion method based on multilayer interband structural model
CN108414454A (en) * 2018-01-25 2018-08-17 北京农业信息技术研究中心 The synchronized measurement system and measurement method of a kind of plant three-dimensional structure and spectral information
CN109035192A (en) * 2018-08-17 2018-12-18 凌云光技术集团有限责任公司 A kind of visible images and full-colour image synthetic method and device
CN111524079A (en) * 2020-04-22 2020-08-11 四川大学 Multispectral remote sensing image panchromatic sharpening method based on component replacement and low-pass filtering
CN111524079B (en) * 2020-04-22 2023-06-20 四川大学 Multispectral remote sensing image full-color sharpening method based on component replacement and low-pass filtering
CN111563866A (en) * 2020-05-07 2020-08-21 重庆三峡学院 Multi-source remote sensing image fusion method
CN112184554A (en) * 2020-10-13 2021-01-05 重庆邮电大学 Remote sensing image fusion method based on residual mixed expansion convolution
CN112184554B (en) * 2020-10-13 2022-08-23 重庆邮电大学 Remote sensing image fusion method based on residual mixed expansion convolution

Also Published As

Publication number Publication date
CN101216557B (en) 2011-07-20

Similar Documents

Publication Publication Date Title
CN101216557B (en) Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method
Shao et al. Remote sensing image fusion with deep convolutional neural network
CN110533620B (en) Hyperspectral and full-color image fusion method based on AAE extraction spatial features
Garzelli et al. Optimal MMSE pan sharpening of very high resolution multispectral images
CN108830796B (en) Hyperspectral image super-resolution reconstruction method based on spectral-spatial combination and gradient domain loss
CN109509160A (en) Hierarchical remote sensing image fusion method utilizing layer-by-layer iteration super-resolution
CN111127374B (en) Pan-sharing method based on multi-scale dense network
CN102982517B (en) Remote-sensing image fusion method based on local correlation of light spectrum and space
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
CN114677300B (en) Method and system for deep noise reduction of hyperspectral image based on double-stage learning framework
CN101140325A (en) Method for enhancing distinguishability cooperated with space-optical spectrum information of high optical spectrum image
CN100465661C (en) Multispectral and panchromatic image fusion method of supercomplex principal element weighting
CN105160647A (en) Panchromatic multi-spectral image fusion method
CN103116881A (en) Remote sensing image fusion method based on PCA (principal component analysis) and Shearlet conversion
CN113724164B (en) Visible light image noise removing method based on fusion reconstruction guidance filtering
CN103679661A (en) Significance analysis based self-adaptive remote sensing image fusion method
CN113793289A (en) Multi-spectral image and panchromatic image fuzzy fusion method based on CNN and NSCT
CN106157269A (en) Full-colour image sharpening method based on direction multiple dimensioned group low-rank decomposition
CN101540039B (en) Method for super resolution of single-frame images
CN113763267A (en) Image restoration method under strong scattering environment based on NSCT image fusion
CN102646267A (en) Degraded image restoration method and system
Sulaiman et al. A robust pan-sharpening scheme for improving resolution of satellite images in the domain of the nonsubsampled shearlet transform
CN115100075B (en) Hyperspectral panchromatic sharpening method based on spectrum constraint and residual attention network
CN1296871C (en) Remote sensitive image fusing method based on residual error
Liu et al. Locally linear detail injection for pansharpening

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110720

Termination date: 20131227