Due to performance limits of remote sensing systems, multispectral images have limited spatial resolution. Their spatial resolution can be improved by merging them with higher-resolution image data. A fundamental problem frequently occurring in existing fusion processes, however, is the distortion of spectral information. This paper presents a spatially adaptive image fusion algorithm that produces visually natural images while retaining the quality of local spectral information. The high-frequency information of the high-resolution image to be injected into the resampled multispectral images is controlled by adaptive gains, so that the difference in local spectral characteristics between the high- and low-resolution images is incorporated into the fusion. Each gain is estimated to minimize the l2-norm of the error between the original and the estimated pixel values over a spatially adaptive window whose weights are proportional to the spectral correlation measurements of the corresponding regions. This method is applied to a set of co-registered Landsat 7 Enhanced Thematic Mapper Plus (ETM+) panchromatic and multispectral image data. The experimental results show that high-resolution images can be synthesized by the proposed method, which successfully preserves the spectral content of the multispectral images.
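The gain-controlled injection described above can be sketched as follows. This is an illustrative assumption of the general scheme, not the paper's exact formulation: the panchromatic high-frequency detail is scaled at each pixel by a gain fitted by least squares over a local window, and the spectral-correlation weighting of the window described in the abstract is omitted for brevity (a uniform window is used instead). All function and variable names here are hypothetical.

```python
import numpy as np

def adaptive_gain_pansharpen(pan, ms, win=5):
    """Sketch of spatially adaptive pan-sharpening (assumed scheme):
    fuse a high-resolution panchromatic band `pan` with a resampled
    multispectral band `ms` of the same shape by injecting gain-scaled
    high-frequency detail. Returns (fused band, per-pixel gains)."""
    pan = pan.astype(float)
    ms = ms.astype(float)
    h, w = pan.shape
    r = win // 2

    # Box-filter low-pass of the pan image; the detail to inject is
    # the high-frequency residual pan - low.
    pad = np.pad(pan, r, mode="reflect")
    low = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            low += pad[dy:dy + h, dx:dx + w]
    low /= win * win
    detail = pan - low

    gains = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            y0, y1 = max(0, i - r), min(h, i + r + 1)
            x0, x1 = max(0, j - r), min(w, j + r + 1)
            p = pan[y0:y1, x0:x1].ravel()
            m = ms[y0:y1, x0:x1].ravel()
            pc = p - p.mean()
            mc = m - m.mean()
            # Least-squares gain: minimizes the l2-norm of the error of
            # a linear fit ms ~ gain * pan within the local window.
            gains[i, j] = (pc @ mc) / (pc @ pc + 1e-12)

    return ms + gains * detail, gains
```

Because the gain is re-estimated in every window, detail injection is attenuated wherever the multispectral band is weakly coupled to the panchromatic band, which is what limits spectral distortion in this family of methods.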
Bibliographical note
Funding Information:
This work was supported in part by Korea Science and Engineering Foundation (KOSEF) through Biometrics Engineering Research Center (BERC) at Yonsei University and Information Technology Research Center (ITRC) through IT SOC Research Center at Yonsei University.
All Science Journal Classification (ASJC) codes
- Earth and Planetary Sciences (all)