Remote sensing image fusion via wavelet transform and sparse representation

Jian Cheng, Haijun Liu, Ting Liu, Feng Wang, Hongsheng Li

ISPRS Journal of Photogrammetry and Remote Sensing


School of Electronic Engineering, University of Electronic Science and Technology of China

ISPRS Journal of Photogrammetry and Remote Sensing 104 (2015) 158–173
http://dx.doi.org/10.1016/j.isprsjprs.2015.02.015

Article history:
Received 28 September 2014
Received in revised form 16 February 2015
Accepted 27 February 2015

Keywords:
Remote sensing image fusion
Wavelet transform
Sparse representation
Training dictionary

Abstract

In this paper, we propose a remote sensing image fusion method which combines the wavelet transform and sparse representation to obtain fusion images with high spectral resolution and high spatial resolution. Firstly, the intensity-hue-saturation (IHS) transform is applied to the Multi-Spectral (MS) images. Then, the wavelet transform is applied to the intensity component of the MS images and to the Panchromatic (Pan) image to construct their multi-scale representations. With the multi-scale representation, different fusion strategies are taken on the low-frequency and the high-frequency sub-images. Sparse representation with a training dictionary is introduced into the low-frequency sub-image fusion. The fusion rule for the sparse representation coefficients of the low-frequency sub-images is defined by the spatial frequency maximum. For the high-frequency sub-images with prolific detail information, the fusion rule is established by an image information fusion measurement indicator. Finally, the fused results are obtained through the inverse wavelet transform and the inverse IHS transform. The wavelet transform has the ability to extract the spectral information and the global spatial details from the original pairwise images, while sparse representation can extract the local structures of images effectively. Therefore, our proposed fusion method can well preserve the spectral information and the spatial detail information of the original images. The experimental results on remote sensing images have demonstrated that our proposed method could well maintain the spectral characteristics of the fusion images with a high spatial resolution.

© 2015 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

⇑ Corresponding author. Tel./fax: +86 028 61830064.
E-mail address: justus.cheng@gmail.com (J. Cheng).

1. Introduction

Optical remote sensors in satellites can provide images with extensive uses in land cover classification (Fauvel et al., 2006), natural disaster and environment monitoring (Melgani and Bazi, 2006), land change detection (Du et al., 2007), etc. There are many circumstances where a single image with both high spectral and high spatial information is needed (Pohl and van Genderen, 1998). However, most optical satellites provide image data with spectral and spatial resolutions separately. For instance, the QuickBird satellite provides Multi-Spectral (MS) images with high spectral resolution but low spatial resolution, and the Panchromatic (Pan) image with high spatial resolution but low spectral resolution. Since the two types of images respectively provide enough and even redundant information for these two types of resolutions, an image with both high spatial and spectral resolutions could be created by fusing the corresponding MS images and Pan image. The fused imagery promotes the human ability to perceive and understand remote sensing data. It has also been shown that fused images could improve the accuracy of remote sensing image classification and object recognition (Pohl and van Genderen, 1998; Simone et al., 2002).

Various methods have been proposed to fuse the co-registered MS and Pan image pairs. In general, these methods can be classified into two categories.

The first category is the spatial domain based methods, which directly fuse the source images with their intensity components. The intensity-hue-saturation (IHS) transform (Tu et al., 2001; Ling et al., 2007; Choi et al., 2008) method has been most widely used in practical applications. However, color distortion becomes a common problem of the IHS based techniques when fusing Pan and MS image pairs whose mutual correlations are very low. Examples of these image pairs include the ones obtained by the IKONOS and QuickBird satellites. Rahmani et al. (2010) proposed an adaptive IHS (AIHS) method incorporating image adaptive coefficients and an edge adaptive technique, which could produce images with higher spectral resolution while maintaining the high-quality spatial resolution compared to the original IHS. The smoothing filter-based intensity modulation (SFIM) (Liu, 2000) is one of the most frequently employed approaches in practice to control the trade-off between the spatial and spectral information. It preserves more spectral information but suffers more spatial information loss. The high pass filtering (HPF) (Chen and Chen, 2006) method injects spatial features extracted by a high-pass filter into the MS images. It ignores much texture information from the Pan image; therefore its effectiveness heavily depends on the filter design.
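The component-substitution idea behind this first category can be sketched in a few lines. This is a minimal illustration, not the method of any cited paper: it uses the common "fast IHS" simplification in which the intensity is the band mean, so substituting the Pan band for the intensity reduces to adding the detail (Pan − I) to every band. The function name, the statistics-matching step, and the toy arrays are assumptions made for the sketch.

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Component-substitution fusion sketch: swap the intensity of the
    MS image for the (statistics-matched) Pan band."""
    # ms: (H, W, 3) float array in [0, 1]; pan: (H, W) float array.
    intensity = ms.mean(axis=2)  # fast-IHS intensity: I = (R + G + B) / 3
    # Match the Pan band's mean/variance to the intensity component
    # to limit spectral distortion before substitution.
    pan_m = ((pan - pan.mean()) * (intensity.std() / (pan.std() + 1e-12))
             + intensity.mean())
    # Additive form of the substitution: replacing I by Pan injects the
    # same detail (Pan - I) into every spectral band.
    return np.clip(ms + (pan_m - intensity)[..., None], 0.0, 1.0)

# Toy example: a 4x4 "MS" image and a "Pan" band of the same size.
rng = np.random.default_rng(0)
ms = rng.random((4, 4, 3))
pan = rng.random((4, 4))
fused = ihs_fuse(ms, pan)
```

Note that when the Pan band already equals the intensity component, the substitution injects no detail and the MS image passes through unchanged, which is a quick sanity check on the rule.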

The methods in the second category are based on the multi-scale representation of images, such as the wavelet transform (WT), the à trous wavelet transform (AWT) and the contourlet transform (CT). These transforms provide a coherent framework for the multi-scale representation of remote sensing images. The wavelet transform (WT) based methods (Li et al., 1995; Maria et al., 2004; Pajares and Manuel de la Cruz, 2004; Amolins et al., 2007) first extract spatial information from the high-resolution Pan image, then inject it into the MS bands to improve the spatial resolution and reduce the color distortion.
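The second-category pipeline (multi-scale decomposition, separate low- and high-frequency fusion rules, inverse transform) can likewise be sketched with a one-level Haar wavelet. This is a simplified stand-in for the method described in the abstract: it applies the spatial-frequency-maximum rule directly to the approximation sub-images rather than to sparse coefficients over a trained dictionary, and it substitutes an absolute-maximum rule for the high-frequency information measurement indicator; all names are illustrative.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, (LH, HL, HH))."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Exact inverse of haar_dwt2."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def spatial_frequency(x):
    """SF = sqrt(RF^2 + CF^2), RF/CF from row/column first differences."""
    rf = np.sqrt(np.mean(np.diff(x, axis=0) ** 2))
    cf = np.sqrt(np.mean(np.diff(x, axis=1) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse(img_a, img_b):
    """Fuse two registered grayscale images of equal, even-sized shape."""
    ll_a, hi_a = haar_dwt2(img_a)
    ll_b, hi_b = haar_dwt2(img_b)
    # Low-frequency rule: keep the approximation with the larger spatial
    # frequency (activity); the paper applies this comparison to sparse
    # coefficients over a trained dictionary instead.
    ll = ll_a if spatial_frequency(ll_a) >= spatial_frequency(ll_b) else ll_b
    # High-frequency rule: keep the larger-magnitude detail coefficient,
    # a standard stand-in for the paper's information measurement indicator.
    hi = tuple(np.where(np.abs(ca) >= np.abs(cb), ca, cb)
               for ca, cb in zip(hi_a, hi_b))
    return haar_idwt2(ll, hi)

# Toy example: the two inputs stand in for the MS intensity plane and
# the Pan image after registration.
rng = np.random.default_rng(1)
a = rng.random((8, 8))
b = rng.random((8, 8))
fused = fuse(a, b)
```

Because the Haar pair reconstructs exactly, fusing an image with itself returns the image, which is a convenient check that the decomposition and the fusion rules are consistent.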