Abstract
Contextual detection uses not only visual features but also contextual information from the scene in the image. Most conventional context-based methods incur heavy training costs or depend strongly on the original baseline detector. To overcome these shortcomings, we propose a new method based on co-occurrence context. It is built upon a recent off-the-shelf baseline detector and achieves higher accuracy than existing works while detecting additional true positives that the baseline detector could not find. Furthermore, we construct an indoor-specific NYUv2-context dataset to investigate context-based detection of indoor objects. It is a subset of the original NYU-depth-v2 dataset and will be published online to encourage research on context. In the experiments, the proposed method obtained 21.22% mAP, outperforming the baseline and a compared context-based method by 0.91 and 0.36 percentage points of mAP, respectively.
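To illustrate the general idea of co-occurrence context rescoring described in the abstract, the following is a minimal sketch, not the paper's exact formulation: the class labels, the co-occurrence matrix values, and the linear fusion rule with weight `alpha` are all assumptions introduced for illustration.

```python
import numpy as np

# Hypothetical indoor class labels (illustrative only).
CLASSES = ["bed", "chair", "table", "sofa", "monitor"]

# cooc[i, j]: assumed empirical probability that class j appears in an image
# given that class i appears (values are made up for this sketch).
cooc = np.array([
    [1.00, 0.30, 0.25, 0.10, 0.05],
    [0.30, 1.00, 0.70, 0.20, 0.40],
    [0.25, 0.70, 1.00, 0.15, 0.50],
    [0.10, 0.20, 0.15, 1.00, 0.10],
    [0.05, 0.40, 0.50, 0.10, 1.00],
])

def rescore(detections, alpha=0.7):
    """Fuse each baseline detection score with co-occurrence support from
    the other classes detected in the same image.

    detections: list of (class_index, score, bbox) from a baseline detector.
    alpha: assumed weight on the original detector score.
    """
    rescored = []
    for i, (ci, si, box) in enumerate(detections):
        # Confidence-weighted average of co-occurrence support
        # contributed by the other detections in the image.
        others = [(cj, sj) for j, (cj, sj, _) in enumerate(detections) if j != i]
        if others:
            support = sum(sj * cooc[cj, ci] for cj, sj in others) / sum(sj for _, sj in others)
        else:
            support = si  # no context available; keep the original score
        rescored.append((ci, alpha * si + (1 - alpha) * support, box))
    return rescored

# Example: a low-confidence "monitor" detection gains support from a
# confidently detected "table" in the same indoor scene.
dets = [(2, 0.90, (10, 40, 120, 160)), (4, 0.35, (60, 20, 90, 45))]
print(rescore(dets))
```

In this sketch, contextual support can raise borderline detections above the decision threshold, which is one way additional true positives missed by the baseline detector could be recovered.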
Original language | English |
---|---|
Pages (from-to) | 56-61 |
Number of pages | 6 |
Journal | Pattern Recognition Letters |
Volume | 86 |
DOIs | |
Publication status | Published - 2017 Jan 15 |
Bibliographical note
Publisher Copyright: © 2016 Elsevier B.V.
All Science Journal Classification (ASJC) codes
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence