Recovering Translucent Objects Using a Single Time-of-Flight Depth Camera

Hyunjung Shim, Seungkyu Lee

Research output: Contribution to journal › Article

12 Citations (Scopus)

Abstract

Translucency introduces great challenges to 3-D acquisition because of complicated light behaviors such as refraction and transmittance. In this paper, we describe the development of a unified 3-D data acquisition framework that reconstructs translucent objects using a single commercial time-of-flight (ToF) camera. In our capture scenario, we record a depth map and intensity image of the scene twice using a static ToF camera; first, we capture the depth map and intensity image of an arbitrary background, and then we position the translucent foreground object and record a second depth map and intensity image with both the foreground and the background. As a result of material characteristics, the translucent object yields systematic distortions in the depth map. We developed a new distance representation that interprets the depth distortion induced as a result of translucency. By analyzing ToF depth sensing principles, we constructed a distance model governed by the level of translucency, foreground depth, and background depth. Using an analysis-by-synthesis approach, we can recover the 3-D geometry of a translucent object from a pair of depth maps and their intensity images. Extensive evaluation and case studies demonstrate that our method is effective for modeling the nonlinear depth distortion due to translucency and for reconstruction of a 3-D translucent object.
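The abstract describes how a translucent foreground mixes with the background return to produce a systematic, nonlinear depth distortion in a ToF measurement. The paper's actual distance model is not reproduced here; the following is only a minimal illustrative sketch, under the common continuous-wave ToF assumption that the pixel reports the phase of a superposition of two returns. The function name, the single mixing weight `alpha` (standing in for the "level of translucency"), and the two-return simplification are all assumptions for illustration, not the authors' formulation.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def mixed_tof_depth(d_fg, d_bg, alpha, f_mod=20e6):
    """Toy continuous-wave ToF model: the pixel sees a foreground
    return (weight 1 - alpha) plus a background return transmitted
    through the translucent surface (weight alpha). The reported
    depth corresponds to the phase of the summed signal, which lies
    nonlinearly between the true foreground and background depths."""
    phi_fg = 4 * math.pi * f_mod * d_fg / C   # round-trip phase of foreground return
    phi_bg = 4 * math.pi * f_mod * d_bg / C   # round-trip phase of background return
    # Complex superposition of the two returns
    re = (1 - alpha) * math.cos(phi_fg) + alpha * math.cos(phi_bg)
    im = (1 - alpha) * math.sin(phi_fg) + alpha * math.sin(phi_bg)
    phi = math.atan2(im, re) % (2 * math.pi)
    # Convert the mixed phase back to a distance
    return C * phi / (4 * math.pi * f_mod)

# An opaque object (alpha = 0) reads the true foreground depth;
# increasing translucency pulls the reading toward the background.
print(mixed_tof_depth(1.0, 2.0, 0.0))  # ~1.0 m
print(mixed_tof_depth(1.0, 2.0, 0.5))  # between 1.0 and 2.0 m
```

In this simplified picture, the analysis-by-synthesis step of the paper amounts to searching for the foreground depth (and translucency level) whose synthesized measurement best matches the observed depth, given the background depth recorded in the first capture.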

Original language: English
Article number: 7029010
Pages (from-to): 841-854
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 26
Issue number: 5
DOI: 10.1109/TCSVT.2015.2397231
Publication status: Published - 2016 May 1

Fingerprint

  • Cameras
  • Refraction
  • Data acquisition
  • Geometry

All Science Journal Classification (ASJC) codes

  • Media Technology
  • Electrical and Electronic Engineering

Cite this

@article{0036e73cee6e41f58ea60746b3fb9e36,
title = "Recovering Translucent Objects Using a Single Time-of-Flight Depth Camera",
abstract = "Translucency introduces great challenges to 3-D acquisition because of complicated light behaviors such as refraction and transmittance. In this paper, we describe the development of a unified 3-D data acquisition framework that reconstructs translucent objects using a single commercial time-of-flight (ToF) camera. In our capture scenario, we record a depth map and intensity image of the scene twice using a static ToF camera; first, we capture the depth map and intensity image of an arbitrary background, and then we position the translucent foreground object and record a second depth map and intensity image with both the foreground and the background. As a result of material characteristics, the translucent object yields systematic distortions in the depth map. We developed a new distance representation that interprets the depth distortion induced as a result of translucency. By analyzing ToF depth sensing principles, we constructed a distance model governed by the level of translucency, foreground depth, and background depth. Using an analysis-by-synthesis approach, we can recover the 3-D geometry of a translucent object from a pair of depth maps and their intensity images. Extensive evaluation and case studies demonstrate that our method is effective for modeling the nonlinear depth distortion due to translucency and for reconstruction of a 3-D translucent object.",
author = "Hyunjung Shim and Seungkyu Lee",
year = "2016",
month = "5",
day = "1",
doi = "10.1109/TCSVT.2015.2397231",
language = "English",
volume = "26",
pages = "841--854",
journal = "IEEE Transactions on Circuits and Systems for Video Technology",
issn = "1051-8215",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "5",
}

Recovering Translucent Objects Using a Single Time-of-Flight Depth Camera. / Shim, Hyunjung; Lee, Seungkyu.

In: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 26, No. 5, 7029010, 01.05.2016, p. 841-854.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Recovering Translucent Objects Using a Single Time-of-Flight Depth Camera

AU - Shim, Hyunjung

AU - Lee, Seungkyu

PY - 2016/5/1

Y1 - 2016/5/1

N2 - Translucency introduces great challenges to 3-D acquisition because of complicated light behaviors such as refraction and transmittance. In this paper, we describe the development of a unified 3-D data acquisition framework that reconstructs translucent objects using a single commercial time-of-flight (ToF) camera. In our capture scenario, we record a depth map and intensity image of the scene twice using a static ToF camera; first, we capture the depth map and intensity image of an arbitrary background, and then we position the translucent foreground object and record a second depth map and intensity image with both the foreground and the background. As a result of material characteristics, the translucent object yields systematic distortions in the depth map. We developed a new distance representation that interprets the depth distortion induced as a result of translucency. By analyzing ToF depth sensing principles, we constructed a distance model governed by the level of translucency, foreground depth, and background depth. Using an analysis-by-synthesis approach, we can recover the 3-D geometry of a translucent object from a pair of depth maps and their intensity images. Extensive evaluation and case studies demonstrate that our method is effective for modeling the nonlinear depth distortion due to translucency and for reconstruction of a 3-D translucent object.

AB - Translucency introduces great challenges to 3-D acquisition because of complicated light behaviors such as refraction and transmittance. In this paper, we describe the development of a unified 3-D data acquisition framework that reconstructs translucent objects using a single commercial time-of-flight (ToF) camera. In our capture scenario, we record a depth map and intensity image of the scene twice using a static ToF camera; first, we capture the depth map and intensity image of an arbitrary background, and then we position the translucent foreground object and record a second depth map and intensity image with both the foreground and the background. As a result of material characteristics, the translucent object yields systematic distortions in the depth map. We developed a new distance representation that interprets the depth distortion induced as a result of translucency. By analyzing ToF depth sensing principles, we constructed a distance model governed by the level of translucency, foreground depth, and background depth. Using an analysis-by-synthesis approach, we can recover the 3-D geometry of a translucent object from a pair of depth maps and their intensity images. Extensive evaluation and case studies demonstrate that our method is effective for modeling the nonlinear depth distortion due to translucency and for reconstruction of a 3-D translucent object.

UR - http://www.scopus.com/inward/record.url?scp=84969796266&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84969796266&partnerID=8YFLogxK

U2 - 10.1109/TCSVT.2015.2397231

DO - 10.1109/TCSVT.2015.2397231

M3 - Article

VL - 26

SP - 841

EP - 854

JO - IEEE Transactions on Circuits and Systems for Video Technology

JF - IEEE Transactions on Circuits and Systems for Video Technology

SN - 1051-8215

IS - 5

M1 - 7029010

ER -