Abstract
We present a scene parsing method that utilizes global context information based on both parametric and nonparametric models. Compared to previous methods that only exploit the local relationship between objects, we train a context network based on scene similarities to generate feature representations for global contexts. In addition, these learned features are utilized to generate global and spatial priors for explicit class inference. We then design modules to embed these feature representations and priors into the segmentation network as additional global context cues. We show that the proposed method can eliminate false positives that are not compatible with the global context representations. Experiments on both the MIT ADE20K and PASCAL Context datasets show that the proposed method performs favorably against existing methods.
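The abstract does not spell out how the global context features are injected into the segmentation network. The sketch below is a minimal illustration (assumed here, not taken from the paper) of one common embedding design: tile a per-image context vector over the spatial grid and concatenate it with the local feature maps before classification. The layer sizes and the 150-class output (as in ADE20K) are illustrative only.

```python
# Minimal sketch of embedding a global context vector into a segmentation head.
# This is an assumed illustration, not the authors' implementation.
import torch
import torch.nn as nn

class GlobalContextEmbedding(nn.Module):
    def __init__(self, local_channels=512, context_dim=256, num_classes=150):
        super().__init__()
        # 1x1 conv fuses the concatenated local + tiled global features.
        self.fuse = nn.Conv2d(local_channels + context_dim, local_channels, kernel_size=1)
        self.classifier = nn.Conv2d(local_channels, num_classes, kernel_size=1)

    def forward(self, local_feats, context_vec):
        # local_feats: (N, C, H, W) feature maps from the segmentation backbone
        # context_vec: (N, D) per-image global context representation
        n, _, h, w = local_feats.shape
        tiled = context_vec[:, :, None, None].expand(n, context_vec.size(1), h, w)
        fused = torch.relu(self.fuse(torch.cat([local_feats, tiled], dim=1)))
        return self.classifier(fused)  # per-pixel class scores

# Usage with random tensors standing in for real features.
module = GlobalContextEmbedding()
scores = module(torch.randn(2, 512, 32, 32), torch.randn(2, 256))
print(scores.shape)  # torch.Size([2, 150, 32, 32])
```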
Original language | English |
---|---|
Title of host publication | Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 2650-2658 |
Number of pages | 9 |
ISBN (Electronic) | 9781538610329 |
DOIs | |
Publication status | Published - 2017 Dec 22 |
Event | 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy |
Duration | 2017 Oct 22 → 2017 Oct 29 |
Publication series
Name | Proceedings of the IEEE International Conference on Computer Vision |
---|---|
Volume | 2017-October |
ISSN (Print) | 1550-5499 |
Other
Other | 16th IEEE International Conference on Computer Vision, ICCV 2017 |
---|---|
Country/Territory | Italy |
City | Venice |
Period | 2017 Oct 22 → 2017 Oct 29 |
Bibliographical note
Funding Information: This work is supported in part by NSF CAREER Grant #1149783 and gifts from Adobe and NVIDIA.
Publisher Copyright:
© 2017 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Computer Vision and Pattern Recognition