Learning to Find Unpaired Cross-Spectral Correspondences

Somi Jeong, Seungryong Kim, Kihong Park, Kwanghoon Sohn

Research output: Contribution to journal › Article

2 Citations (Scopus)


We present a deep architecture and learning framework for establishing correspondences across cross-spectral visible and infrared images in an unpaired setting. To overcome the lack of paired cross-spectral data, we design unified image translation and feature extraction modules that are learned jointly, each boosting the other. Concretely, the image translation module is learned only with the unpaired cross-spectral data, while the feature extraction module is learned with an input image and its translated image. By learning the two modules simultaneously, the image translation module generates translated images that preserve not only the domain-specific attributes, through separate latent spaces, but also the domain-agnostic contents, through a feature consistency constraint. In the inference phase, the cross-spectral feature similarity is augmented by intra-spectral similarities between the features extracted from the translated images. Experimental results show that this model outperforms state-of-the-art unpaired image translation methods and cross-spectral feature descriptors on various visible and infrared benchmarks.
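The inference step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature vectors, the `augmented_similarity` helper, and the simple averaging of the three terms are all assumptions made for demonstration; the translated-image features would come from the jointly learned translation and extraction modules.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature descriptors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def augmented_similarity(f_vis, f_ir, f_vis_translated, f_ir_translated):
    """Augment the cross-spectral similarity with the two intra-spectral
    similarities measured against features of translated images.
    The plain average used here is an illustrative choice, not the
    paper's exact combination rule."""
    cross = cosine(f_vis, f_ir)                 # visible vs. infrared
    intra_vis = cosine(f_vis, f_ir_translated)  # both in the visible domain
    intra_ir = cosine(f_vis_translated, f_ir)   # both in the infrared domain
    return (cross + intra_vis + intra_ir) / 3.0

# Toy 128-D descriptors standing in for outputs of the feature extractor.
rng = np.random.default_rng(0)
f_vis = rng.standard_normal(128)
f_ir = f_vis + 0.5 * rng.standard_normal(128)            # noisy true match
f_ir_translated = f_vis + 0.1 * rng.standard_normal(128)  # IR translated to visible
f_vis_translated = f_ir + 0.1 * rng.standard_normal(128)  # visible translated to IR

score = augmented_similarity(f_vis, f_ir, f_vis_translated, f_ir_translated)
print(round(score, 3))
```

Because the translated images share a spectral domain with their comparison targets, the two intra-spectral terms are typically more reliable than the raw cross-spectral term, which is what makes the augmentation useful.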

Original language: English
Article number: 8721725
Pages (from-to): 5394-5406
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Issue number: 11
Publication status: Published - 2019 Nov

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

