Learning to Find Unpaired Cross-Spectral Correspondences

Somi Jeong, Seungryong Kim, Kihong Park, Kwanghoon Sohn

Research output: Contribution to journal › Article

Abstract

We present a deep architecture and learning framework for establishing correspondences across cross-spectral visible and infrared images in an unpaired setting. To overcome the lack of paired cross-spectral data, we design unified image translation and feature extraction modules that are learned jointly, each boosting the other. Concretely, the image translation module is learned only from the unpaired cross-spectral data, while the feature extraction module is learned from an input image and its translated counterpart. By learning the two modules simultaneously, the image translation module generates a translated image that preserves not only the domain-specific attributes, through separate latent spaces, but also the domain-agnostic content, through a feature-consistency constraint. At inference time, the cross-spectral feature similarity is augmented by intra-spectral similarities between the features extracted from the translated images. Experimental results show that this model outperforms state-of-the-art unpaired image translation methods and cross-spectral feature descriptors on various visible and infrared benchmarks.
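The inference strategy the abstract describes — augmenting the cross-spectral feature similarity with intra-spectral similarities computed via the translated images — can be sketched as follows. This is a minimal illustration only: the cosine metric, the random placeholder descriptors, and the simple averaging are assumptions for the sketch, not the paper's actual descriptors or aggregation rule.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two 1-D feature descriptors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def matching_score(f_vis, f_ir, f_vis2ir, f_ir2vis):
    """Combine the cross-spectral similarity with the two intra-spectral
    similarities made possible by image translation.

    f_vis    -- descriptor from the visible image
    f_ir     -- descriptor from the infrared image
    f_vis2ir -- descriptor from the visible image translated to infrared
    f_ir2vis -- descriptor from the infrared image translated to visible
    """
    cross = cosine(f_vis, f_ir)        # visible <-> infrared (hard case)
    intra_ir = cosine(f_vis2ir, f_ir)  # both descriptors in the IR domain
    intra_vis = cosine(f_vis, f_ir2vis)  # both descriptors in the visible domain
    # Simple average of the three cues; the paper's aggregation may differ.
    return (cross + intra_ir + intra_vis) / 3.0

# Random placeholder descriptors standing in for the learned features.
rng = np.random.default_rng(0)
d = 128
f_vis, f_ir = rng.standard_normal(d), rng.standard_normal(d)
f_vis2ir, f_ir2vis = rng.standard_normal(d), rng.standard_normal(d)
score = matching_score(f_vis, f_ir, f_vis2ir, f_ir2vis)
```

The point of the augmentation is that each intra-spectral comparison happens within a single domain, where appearance statistics match, so it is more reliable than the raw cross-spectral comparison alone.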

Original language: English
Article number: 8721725
Pages (from-to): 5394-5406
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Volume: 28
Issue number: 11
DOI: 10.1109/TIP.2019.2917864
Publication status: Published - 2019 Nov


All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Cite this

Jeong, Somi; Kim, Seungryong; Park, Kihong; Sohn, Kwanghoon. Learning to Find Unpaired Cross-Spectral Correspondences. In: IEEE Transactions on Image Processing. 2019; Vol. 28, No. 11, pp. 5394-5406.
@article{ac974d12495f4d2881ecf0a4c28334d7,
title = "Learning to Find Unpaired Cross-Spectral Correspondences",
abstract = "We present a deep architecture and learning framework for establishing correspondences across cross-spectral visible and infrared images in an unpaired setting. To overcome the unpaired cross-spectral data problem, we design the unified image translation and feature extraction modules to be learned in a joint and boosting manner. Concretely, the image translation module is learned only with the unpaired cross-spectral data, and the feature extraction module is learned with an input image and its translated image. By learning two modules simultaneously, the image translation module generates the translated image that preserves not only the domain-specific attributes with separate latent spaces but also the domain-agnostic contents with feature consistency constraint. In an inference phase, the cross-spectral feature similarity is augmented by intra-spectral similarities between the features extracted from the translated images. Experimental results show that this model outperforms the state-of-the-art unpaired image translation methods and cross-spectral feature descriptors on various visible and infrared benchmarks.",
author = "Somi Jeong and Seungryong Kim and Kihong Park and Kwanghoon Sohn",
year = "2019",
month = nov,
doi = "10.1109/TIP.2019.2917864",
language = "English",
volume = "28",
pages = "5394--5406",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "11",

}

TY - JOUR

T1 - Learning to Find Unpaired Cross-Spectral Correspondences

AU - Jeong, Somi

AU - Kim, Seungryong

AU - Park, Kihong

AU - Sohn, Kwanghoon

PY - 2019/11

Y1 - 2019/11

AB - We present a deep architecture and learning framework for establishing correspondences across cross-spectral visible and infrared images in an unpaired setting. To overcome the unpaired cross-spectral data problem, we design the unified image translation and feature extraction modules to be learned in a joint and boosting manner. Concretely, the image translation module is learned only with the unpaired cross-spectral data, and the feature extraction module is learned with an input image and its translated image. By learning two modules simultaneously, the image translation module generates the translated image that preserves not only the domain-specific attributes with separate latent spaces but also the domain-agnostic contents with feature consistency constraint. In an inference phase, the cross-spectral feature similarity is augmented by intra-spectral similarities between the features extracted from the translated images. Experimental results show that this model outperforms the state-of-the-art unpaired image translation methods and cross-spectral feature descriptors on various visible and infrared benchmarks.

UR - http://www.scopus.com/inward/record.url?scp=85071470066&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85071470066&partnerID=8YFLogxK

U2 - 10.1109/TIP.2019.2917864

DO - 10.1109/TIP.2019.2917864

M3 - Article

C2 - 31144636

AN - SCOPUS:85071470066

VL - 28

SP - 5394

EP - 5406

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

IS - 11

M1 - 8721725

ER -