Superpixel segmentation has been widely used in many computer vision tasks. Existing superpixel algorithms are mainly based on hand-crafted features, which often fail to preserve weak object boundaries. In this work, we leverage deep neural networks to facilitate extracting superpixels from images. We show that a simple integration of deep features with existing superpixel algorithms does not result in better performance, as these features do not model segmentation. Instead, we propose a segmentation-aware affinity learning approach for superpixel segmentation. Specifically, we propose a new loss function that takes the segmentation error into account for affinity learning. We also develop the Pixel Affinity Net for affinity prediction. Extensive experimental results show that the proposed algorithm based on the learned segmentation-aware loss performs favorably against the state-of-the-art methods. We also demonstrate the use of the learned superpixels in numerous vision applications with consistent improvements.
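To make the abstract's notion of affinity learning concrete, below is a minimal NumPy sketch of pixel-affinity targets and a boundary-weighted affinity loss. This is an illustrative assumption, not the paper's exact formulation: the function names (`affinity_targets`, `weighted_affinity_loss`), the 4-connectivity layout, and the binary cross-entropy with a `boundary_weight` factor are all hypothetical stand-ins for "a loss function that takes the segmentation error into account."

```python
import numpy as np

def affinity_targets(labels):
    """Ground-truth affinities: 1 if two neighboring pixels share a segment
    label, 0 if an object boundary passes between them.
    `labels` is an (H, W) integer segmentation map; returns horizontal
    (H, W-1) and vertical (H-1, W) affinity maps (4-connectivity sketch)."""
    horiz = (labels[:, :-1] == labels[:, 1:]).astype(np.float64)
    vert = (labels[:-1, :] == labels[1:, :]).astype(np.float64)
    return horiz, vert

def weighted_affinity_loss(pred_h, pred_v, labels, boundary_weight=2.0):
    """Binary cross-entropy on predicted affinities, up-weighting pixel pairs
    that straddle a ground-truth boundary -- one plausible way a loss could
    emphasize segmentation error (an assumption for illustration)."""
    tgt_h, tgt_v = affinity_targets(labels)
    eps = 1e-7
    loss = 0.0
    for pred, tgt in ((pred_h, tgt_h), (pred_v, tgt_v)):
        p = np.clip(pred, eps, 1.0 - eps)
        bce = -(tgt * np.log(p) + (1.0 - tgt) * np.log(1.0 - p))
        # Boundary pairs (target affinity 0) get extra weight.
        w = np.where(tgt == 0.0, boundary_weight, 1.0)
        loss += (w * bce).mean()
    return loss
```

In a learning setting, `pred_h` and `pred_v` would come from a network such as the Pixel Affinity Net described in the paper, and the loss would be minimized by gradient descent; the weighting term is what pushes the network to preserve weak boundaries that hand-crafted features miss.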
|Title of host publication||Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018|
|Publisher||IEEE Computer Society|
|Number of pages||9|
|Publication status||Published - 2018 Dec 14|
|Event||31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018 - Salt Lake City, United States|
Duration: 2018 Jun 18 → 2018 Jun 22
|Name||Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition|
|Conference||31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018|
|City||Salt Lake City|
|Period||2018 Jun 18 → 2018 Jun 22|
Bibliographical note
Funding Information:
M.-H. Yang is supported in part by NSF CAREER (No. 1149783) and gifts from Adobe, Toyota, Panasonic, Samsung, NEC, Verisk, and NVIDIA.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition