Abstract
Convolutional neural networks (CNNs) have driven substantial progress on many problems in computer vision and image processing. Applying them to image fusion, however, has remained challenging due to the lack of labeled data for supervised learning. This paper introduces a deep image fusion network (DIF-Net), an unsupervised deep learning framework for image fusion. DIF-Net parameterizes the entire image fusion process, comprising feature extraction, feature fusion, and image reconstruction, with a single CNN. Its goal is to generate an output image whose contrast is identical to that of the high-dimensional input images. To this end, we propose an unsupervised loss function based on the structure tensor representation of multi-channel image contrast. Unlike traditional fusion methods, which require time-consuming optimization or iterative procedures to obtain results, our loss function is minimized by a stochastic deep learning solver over large-scale training examples. Consequently, the proposed method produces fused images that preserve source image details through a single forward pass of a network trained without reference ground-truth labels. The method applies broadly to image fusion problems, including multi-spectral, multi-focus, and multi-exposure image fusion. Quantitative and qualitative evaluations show that the proposed technique outperforms existing state-of-the-art approaches across these applications.
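As a rough illustration of the idea behind such a loss, the following is a minimal sketch (not the authors' released code) of an unsupervised structure-tensor contrast loss in PyTorch. The forward-difference gradients, border padding, and mean-squared penalty are illustrative assumptions; the paper's exact gradient operator and weighting may differ.

```python
# Hypothetical sketch of a structure-tensor fusion loss; names and
# operator choices are illustrative, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def spatial_gradients(x):
    # x: (B, C, H, W). Forward differences, zero-padded at the borders.
    dx = F.pad(x[:, :, :, 1:] - x[:, :, :, :-1], (0, 1, 0, 0))
    dy = F.pad(x[:, :, 1:, :] - x[:, :, :-1, :], (0, 0, 0, 1))
    return dx, dy

def structure_tensor(x):
    # Per-pixel 2x2 structure tensor, summed over channels and stored
    # as its three unique entries (Jxx, Jxy, Jyy).
    dx, dy = spatial_gradients(x)
    jxx = (dx * dx).sum(dim=1)
    jxy = (dx * dy).sum(dim=1)
    jyy = (dy * dy).sum(dim=1)
    return torch.stack([jxx, jxy, jyy], dim=1)

def fusion_loss(fused, sources):
    # Penalize the distance between the structure tensor of the fused
    # output and that of the channel-stacked source images, so the
    # output reproduces the multi-channel input contrast.
    j_fused = structure_tensor(fused)
    j_sources = structure_tensor(torch.cat(sources, dim=1))
    return F.mse_loss(j_fused, j_sources)
```

Because the loss compares only contrast statistics of the output against the stacked inputs, it needs no ground-truth fused image, which is consistent with the unsupervised training described in the abstract.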
| Original language | English |
|---|---|
| Article number | 8962327 |
| Pages (from-to) | 3845-3858 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 29 |
| DOIs | |
| Publication status | Published - 2020 |
Bibliographical note
Funding Information: Manuscript received April 16, 2019; revised October 22, 2019 and January 3, 2020; accepted January 3, 2020. Date of publication January 17, 2020; date of current version January 31, 2020. This work was supported by the Research and Development Program for Advanced Integrated-Intelligence for Identification (AIID) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT under Grant NRF-2018M3E3A1057289. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Giacomo Boracchi. (Corresponding author: Kwanghoon Sohn.) Hyungjoo Jung and Kwanghoon Sohn are with the School of Electrical and Electronic Engineering, Yonsei University, Seoul 120-749, South Korea (e-mail: coolguy0220@yonsei.ac.kr; khsohn@yonsei.ac.kr).
Publisher Copyright:
© 1992-2012 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Computer Graphics and Computer-Aided Design