Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, there is a lack of practical solutions for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.
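The abstract's core idea — a discrete label optimization over candidate affine transformations, with per-pixel labels iteratively refined by continuous regularization and a constant-time filtering of the cost volume — can be illustrated with a toy sketch. This is not the paper's implementation: the cost function, the box filter (standing in for the edge-aware filter), and the candidate-refresh rule are all simplified placeholder assumptions.

```python
import numpy as np

def box_filter(x, k):
    # Separable box filter over the last two axes; a crude stand-in for
    # the constant-time edge-aware filtering used in DCTM.
    kernel = np.ones(k) / k
    x = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), -1, x)
    x = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), -2, x)
    return x

def dctm_sketch(cost_fn, candidates, shape, iters=3, k=5):
    """Toy discrete-continuous optimization of a dense affine field.

    cost_fn(c) -> (H, W) matching cost for one candidate (hypothetical signature)
    candidates : (L, 6) candidate affine parameters
    Returns an (H, W, 6) per-pixel affine transformation field.
    """
    H, W = shape
    field = np.zeros((H, W, 6))
    for _ in range(iters):
        # Discrete step: evaluate candidates, smooth the cost volume,
        # and assign each pixel its argmin label.
        cost = np.stack([cost_fn(c) for c in candidates])      # (L, H, W)
        cost = box_filter(cost, k)
        best = np.argmin(cost, axis=0)                         # (H, W)
        field = candidates[best]                               # (H, W, 6)
        # Continuous step: spatially regularize the selected parameters,
        # drawing the solution into the continuous space of affine transforms.
        field = box_filter(field.transpose(2, 0, 1), k).transpose(1, 2, 0)
        # Refresh the label set around the regularized field
        # (toy rule: append the mean parameters as a new candidate).
        mean_params = field.reshape(-1, 6).mean(axis=0, keepdims=True)
        candidates = np.vstack([candidates, mean_params])
    return field
```

The alternation mirrors the DCTM framework in spirit: the discrete step keeps the search tractable despite the large affine solution space, while the continuous step lets labels move off the initial candidate grid.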
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 10
Publication status: Published - 2017 Dec 22
Event: 16th IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy
Duration: 2017 Oct 22 → 2017 Oct 29
Series: Proceedings of the IEEE International Conference on Computer Vision
Bibliographical note: Publisher Copyright © 2017 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition