LAT: Local area transform for cross modal correspondence matching

Seungchul Ryu, Seungryong Kim, Kwanghoon Sohn

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Establishing correspondences is a fundamental task in many image processing and computer vision applications. In particular, finding correspondences between a nonlinearly deformed image pair induced by different modality conditions is a challenging problem. This paper describes a simple but powerful image transform called the local area transform (LAT) for modality-robust correspondence estimation. Specifically, LAT transforms an image from the intensity domain to the local area domain, which is invariant under nonlinear intensity deformations, in particular radiometric, photometric, and spectral deformations. Experimental results show that LAT-transformed images remain consistent across nonlinearly deformed images, even under random intensity deformations. LAT reduces the mean absolute difference by approximately 0.20 and the different-pixel ratio by approximately 58% on average compared to conventional methods. Furthermore, descriptors reformulated with LAT outperform conventional methods, which is a promising result for cross-spectral and cross-modality correspondence matching. LAT achieves an approximately 23% improvement in the correct detection ratio and a 10% improvement in the recognition rate for RGB-NIR cross-spectral template matching and cross-spectral feature matching, respectively. LAT reduces the bad pixel percentage by approximately 15% and the root mean squared error by 13.5 in cross-radiation stereo matching. LAT also improves cross-modal dense flow estimation in terms of warping error, providing a 50% error reduction.
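The abstract does not spell out how the local area of a pixel is computed, but the claimed invariance to nonlinear intensity deformations can be illustrated with a simple rank-style surrogate. The sketch below is an assumption-laden stand-in, not the paper's actual LAT: it measures, for each pixel, the fraction of the surrounding window that is no brighter than the center, a quantity that survives any monotonically increasing intensity deformation such as gamma correction.

```python
import numpy as np

def local_area_transform(img, radius=7):
    """Rank-style 'local area' sketch (hypothetical approximation of LAT).

    For each pixel, the output is the fraction of pixels in the
    (2*radius+1) x (2*radius+1) window whose intensity does not exceed
    the center value. This quantity is unchanged by any monotonically
    increasing intensity deformation, illustrating the invariance the
    abstract attributes to the local area domain. The paper's exact
    area definition may differ from this sketch.
    """
    img = np.asarray(img, dtype=np.float64)
    pad = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    win = 2 * radius + 1
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = pad[y:y + win, x:x + win]
            out[y, x] = np.mean(window <= img[y, x])
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    deformed = img ** 2.2                 # nonlinear but monotone intensity deformation
    a = local_area_transform(img)
    b = local_area_transform(deformed)
    print(np.max(np.abs(a - b)))          # ~0: the transform is unchanged by the deformation
```

Running the demo on a gamma-deformed copy of the same image produces an identical transform, which is the kind of consistency the abstract reports for LAT-transformed images.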

Original language: English
Pages (from-to): 218-228
Number of pages: 11
Journal: Pattern Recognition
Volume: 63
DOI: https://doi.org/10.1016/j.patcog.2016.10.006
ISSN: 0031-3203
Publication status: Published - 2017 Mar 1

Fingerprint

Pixels
Template matching
Computer vision
Image processing
Radiation

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

Cite this

Ryu, S., Kim, S., & Sohn, K. (2017). LAT: Local area transform for cross modal correspondence matching. Pattern Recognition, 63, 218-228. https://doi.org/10.1016/j.patcog.2016.10.006