TY - GEN
T1 - DCTM: Discrete-Continuous Transformation Matching for Semantic Flow
T2 - 16th IEEE International Conference on Computer Vision, ICCV 2017
AU - Kim, Seungryong
AU - Min, Dongbo
AU - Lin, Stephen
AU - Sohn, Kwanghoon
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/12/22
Y1 - 2017/12/22
N2 - Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, there is a lack of practical solutions for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.
UR - http://www.scopus.com/inward/record.url?scp=85041900769&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041900769&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2017.485
DO - 10.1109/ICCV.2017.485
M3 - Conference contribution
AN - SCOPUS:85041900769
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 4539
EP - 4548
BT - Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 22 October 2017 through 29 October 2017
ER -