TY - JOUR
T1 - Recovering Translucent Objects Using a Single Time-of-Flight Depth Camera
AU - Shim, Hyunjung
AU - Lee, Seungkyu
N1 - Publisher Copyright: © 2015 IEEE.
PY - 2016/5
Y1 - 2016/5
AB - Translucency introduces great challenges to 3-D acquisition because of complicated light behaviors such as refraction and transmittance. In this paper, we describe the development of a unified 3-D data acquisition framework that reconstructs translucent objects using a single commercial time-of-flight (ToF) camera. In our capture scenario, we record a depth map and intensity image of the scene twice using a static ToF camera: first, we capture the depth map and intensity image of an arbitrary background, and then we position the translucent foreground object and record a second depth map and intensity image with both the foreground and the background. Because of its material characteristics, the translucent object yields systematic distortions in the depth map. We developed a new distance representation that interprets the depth distortion induced by translucency. By analyzing ToF depth sensing principles, we constructed a distance model governed by the level of translucency, the foreground depth, and the background depth. Using an analysis-by-synthesis approach, we can recover the 3-D geometry of a translucent object from a pair of depth maps and their intensity images. Extensive evaluation and case studies demonstrate that our method is effective for modeling the nonlinear depth distortion due to translucency and for reconstructing 3-D translucent objects.
UR - http://www.scopus.com/inward/record.url?scp=84969796266&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84969796266&partnerID=8YFLogxK
U2 - 10.1109/TCSVT.2015.2397231
DO - 10.1109/TCSVT.2015.2397231
M3 - Article
AN - SCOPUS:84969796266
VL - 26
SP - 841
EP - 854
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
SN - 1051-8215
IS - 5
M1 - 7029010
ER -