TY - JOUR
T1 - 2.5D visual relationship detection
AU - Su, Yu-Chuan
AU - Changpinyo, Soravit
AU - Chen, Xiangning
AU - Thoppay, Sathish
AU - Hsieh, Cho-Jui
AU - Shapira, Lior
AU - Soricut, Radu
AU - Adam, Hartwig
AU - Brown, Matthew
AU - Yang, Ming-Hsuan
AU - Gong, Boqing
N1 - Publisher Copyright:
© 2022 Elsevier Inc.
PY - 2022/11
Y1 - 2022/11
N2 - Visual 2.5D perception involves understanding the semantics and geometry of a scene through reasoning about object relationships with respect to the viewer. However, existing works in visual recognition primarily focus on the semantics. To bridge this gap, we study 2.5D visual relationship detection (2.5VRD), in which the goal is to jointly detect objects and predict their relative depth and occlusion relationships. Unlike general VRD, 2.5VRD is egocentric, using the camera's viewpoint as a common reference for all 2.5D relationships. Unlike depth estimation, 2.5VRD is object-centric and does not focus solely on depth. To enable progress on this task, we construct a new dataset consisting of 220K human-annotated 2.5D relationships among 512K objects from 11K images. We analyze this dataset and conduct extensive experiments, including benchmarking multiple state-of-the-art VRD models on this task. Experimental results show that existing models largely rely on semantic cues and simple heuristics to solve 2.5VRD, motivating further research on models for 2.5D perception. We will make our dataset and source code publicly available.
AB - Visual 2.5D perception involves understanding the semantics and geometry of a scene through reasoning about object relationships with respect to the viewer. However, existing works in visual recognition primarily focus on the semantics. To bridge this gap, we study 2.5D visual relationship detection (2.5VRD), in which the goal is to jointly detect objects and predict their relative depth and occlusion relationships. Unlike general VRD, 2.5VRD is egocentric, using the camera's viewpoint as a common reference for all 2.5D relationships. Unlike depth estimation, 2.5VRD is object-centric and does not focus solely on depth. To enable progress on this task, we construct a new dataset consisting of 220K human-annotated 2.5D relationships among 512K objects from 11K images. We analyze this dataset and conduct extensive experiments, including benchmarking multiple state-of-the-art VRD models on this task. Experimental results show that existing models largely rely on semantic cues and simple heuristics to solve 2.5VRD, motivating further research on models for 2.5D perception. We will make our dataset and source code publicly available.
UR - http://www.scopus.com/inward/record.url?scp=85139001566&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139001566&partnerID=8YFLogxK
U2 - 10.1016/j.cviu.2022.103557
DO - 10.1016/j.cviu.2022.103557
M3 - Article
AN - SCOPUS:85139001566
SN - 1077-3142
VL - 224
JO - Computer Vision and Image Understanding
JF - Computer Vision and Image Understanding
M1 - 103557
ER -