Translucency poses significant challenges for 3-D acquisition because of complex light behaviors such as refraction and transmission. In this paper, we describe a unified 3-D data acquisition framework that reconstructs translucent objects using a single commercial time-of-flight (ToF) camera. In our capture scenario, we record a depth map and intensity image of the scene twice using a static ToF camera: first, we capture the depth map and intensity image of an arbitrary background; then, we position the translucent foreground object and record a second depth map and intensity image containing both the foreground and the background. Owing to its material characteristics, the translucent object yields systematic distortions in the depth map. We develop a new distance representation that interprets the depth distortion induced by translucency. By analyzing ToF depth-sensing principles, we construct a distance model governed by the level of translucency, the foreground depth, and the background depth. Using an analysis-by-synthesis approach, we recover the 3-D geometry of a translucent object from a pair of depth maps and their intensity images. Extensive evaluation and case studies demonstrate that our method is effective for modeling the nonlinear depth distortion due to translucency and for reconstructing the 3-D geometry of translucent objects.
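To illustrate how a depth reading can depend jointly on translucency, foreground depth, and background depth, the sketch below models a continuous-wave ToF measurement as the superposition of two returns: one reflected by the translucent surface and one transmitted through it and reflected by the background. This is a minimal sketch of the general phasor-mixing principle behind ToF depth distortion, not the paper's actual distance model; the mixing parameter `alpha` and the function name are hypothetical.

```python
import cmath
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_mixed_depth(alpha, d_fg, d_bg, f_mod=20e6):
    """Predict the depth a CW ToF camera would report when a translucent
    foreground surface at d_fg partially transmits light to an opaque
    background at d_bg.

    alpha in [0, 1] is a hypothetical translucency level: the fraction of
    the returned signal originating from the background.  Each return is
    modeled as a complex phasor with phase 4*pi*f_mod*d/c, and the camera
    is assumed to report the phase of their superposition.
    """
    k = 4 * math.pi * f_mod / C  # phase accumulated per meter of depth
    phasor = (1 - alpha) * cmath.exp(1j * k * d_fg) \
             + alpha * cmath.exp(1j * k * d_bg)
    return cmath.phase(phasor) / k
```

With `alpha = 0` (opaque) the model returns the foreground depth and with `alpha = 1` (fully transparent) the background depth; intermediate values produce a reading between the two, which is the nonlinear distortion the two-capture scheme is designed to disentangle.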
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Publication status: Published - May 2016
Bibliographical note: Publisher Copyright © 2015 IEEE.
All Science Journal Classification (ASJC) codes
- Media Technology
- Electrical and Electronic Engineering