Establishing dense visual correspondence between multiple images is a fundamental task in many applications of computer vision and computational photography. Classical approaches, which aim to estimate dense stereo and optical flow fields for images adjacent in viewpoint or in time, have advanced dramatically in recent studies. However, finding reliable visual correspondence in multi-modal or multi-spectral images still remains unsolved. In this paper, we propose a novel dense matching descriptor, called dense adaptive self-correlation (DASC), to effectively address these matching scenarios. Based on the observation that self-similarity within an image is less sensitive to modality variations, we define the descriptor as a series of adaptive self-correlation similarities for patches within a local support window. To further improve matching quality and runtime efficiency, we propose randomized receptive field pooling, in which a sampling pattern is optimized via discriminative learning. Moreover, the computational redundancy that arises when computing densely sampled descriptors over an entire image is dramatically reduced by applying fast edge-aware filtering. Experiments demonstrate the outstanding performance of the DASC descriptor in many cases of multi-modal and multi-spectral correspondence.
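To illustrate the core idea of a self-correlation descriptor, the following is a minimal toy sketch, not the authors' DASC implementation: for each support window, the centre patch is correlated (via normalized cross-correlation) against a set of randomly sampled patches inside the window, and the correlation values form the descriptor. The function name, patch size, and uniform random sampling are assumptions for illustration; DASC itself uses a learned sampling pattern and efficient edge-aware filtering.

```python
import numpy as np

def self_correlation_descriptor(window, patch_size=3, num_samples=8, seed=0):
    """Toy self-correlation descriptor (illustrative sketch, NOT the authors' DASC).

    Correlates the centre patch of a square support window with
    `num_samples` randomly placed patches inside the window and returns
    the normalized cross-correlation values as the descriptor vector.
    """
    rng = np.random.default_rng(seed)          # fixed seed: same sampling pattern per call
    h, w = window.shape
    r = patch_size // 2
    cy, cx = h // 2, w // 2                    # centre of the support window

    def patch(y, x):
        # Extract a patch_size x patch_size patch centred at (y, x).
        return window[y - r:y + r + 1, x - r:x + r + 1].astype(float)

    def ncc(a, b):
        # Zero-mean normalized cross-correlation; 0 for constant patches.
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    centre = patch(cy, cx)
    desc = []
    for _ in range(num_samples):
        # Sample a patch centre so the patch stays inside the window.
        y = int(rng.integers(r, h - r))
        x = int(rng.integers(r, w - r))
        desc.append(ncc(centre, patch(y, x)))
    return np.asarray(desc)
```

Because NCC is invariant to affine intensity changes of a patch, descriptors built from such self-correlations tend to be more stable across modality variations than raw intensities, which is the observation the abstract builds on.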
|Title of host publication||IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015|
|Publisher||IEEE Computer Society|
|Number of pages||10|
|Publication status||Published - 2015 Oct 14|
|Event||IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015 - Boston, United States|
Duration: 2015 Jun 7 → 2015 Jun 12
|Name||Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition|
|Bibliographical note||Publisher Copyright: © 2015 IEEE.|
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition