Abstract
We present a novel real-time framework for non-rigid 3D reconstruction from a single depth camera that is robust to noise, varying camera poses, and large deformations. KinectFusion achieved high-quality real-time 3D object reconstruction from a single depth camera by implicitly representing an object's surface as a signed distance field (SDF). Many incremental reconstruction methods have since been proposed, steadily improving surface estimation. Previous works primarily focused on improving conventional SDF matching and deformation schemes. In contrast, the proposed framework tackles the temporal inconsistency caused by SDF approximation and fusion, manipulating SDFs so that a target is reconstructed more accurately over time. In our reconstruction pipeline, we introduce a refinement evolution method in which an erroneous SDF from a depth sensor is recovered within a few iterations by propagating SDF values outward from the surface. The reliable gradients of the refined SDFs enable more accurate non-rigid tracking of the target object. Furthermore, we propose a level-set evolution for SDF fusion that allows SDFs to be manipulated stably throughout the reconstruction pipeline. The proposed methods are fully parallelizable and run in real time. Qualitative and quantitative evaluations show that incorporating the refinement and fusion methods into the reconstruction pipeline improves 3D reconstruction accuracy and temporal reliability by avoiding cumulative errors. Our pipeline yields more accurate reconstructions that are robust to noise and large motions, and it outperforms previous state-of-the-art reconstruction methods.
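The abstract does not spell out the refinement scheme, but the idea of recovering a corrupted SDF by evolving it outward from the surface closely resembles classic level-set reinitialization, which relaxes a field φ toward |∇φ| = 1 while approximately preserving its zero level set. The sketch below is that textbook scheme (Sussman-style reinitialization with Godunov upwinding) in 2D NumPy, not the paper's actual method; `reinitialize_sdf` and all of its parameters are illustrative assumptions.

```python
import numpy as np

def reinitialize_sdf(phi, h=1.0, n_iters=10, dt=0.5):
    """Classic level-set reinitialization (Sussman et al. style), NOT the
    paper's method: evolve d(phi)/dtau = sign(phi0) * (1 - |grad phi|) so
    phi relaxes toward a valid signed distance field (|grad phi| = 1)
    while approximately preserving its zero level set. 2D for clarity;
    np.roll gives periodic boundaries, a simplification."""
    phi = phi.astype(np.float64).copy()
    s = phi / np.sqrt(phi**2 + h**2)  # smoothed sign of the initial field
    for _ in range(n_iters):
        # one-sided differences along each axis
        dxm = (phi - np.roll(phi, 1, axis=0)) / h   # backward x
        dxp = (np.roll(phi, -1, axis=0) - phi) / h  # forward x
        dym = (phi - np.roll(phi, 1, axis=1)) / h   # backward y
        dyp = (np.roll(phi, -1, axis=1) - phi) / h  # forward y
        # Godunov upwind gradient magnitude, switched on the sign of phi0
        gp = np.sqrt(np.maximum(np.maximum(dxm, 0.0)**2, np.minimum(dxp, 0.0)**2)
                     + np.maximum(np.maximum(dym, 0.0)**2, np.minimum(dyp, 0.0)**2))
        gm = np.sqrt(np.maximum(np.minimum(dxm, 0.0)**2, np.maximum(dxp, 0.0)**2)
                     + np.maximum(np.minimum(dym, 0.0)**2, np.maximum(dyp, 0.0)**2))
        grad = np.where(s > 0.0, gp, gm)
        phi -= dt * s * (grad - 1.0)  # relax toward |grad phi| = 1
    return phi

# toy usage: a noisy circle-shaped field relaxes back toward a valid SDF
y, x = np.mgrid[-1:1:128j, -1:1:128j]
h = 2.0 / 127
phi0 = np.sqrt(x**2 + y**2) - 0.5 + 0.05 * np.random.randn(128, 128)
phi = reinitialize_sdf(phi0, h=h, n_iters=20, dt=0.5 * h)
```

For context, the conventional fusion step that such a pipeline improves upon is the per-voxel weighted running average from KinectFusion (Newcombe et al., 2011). The sketch below shows only that well-known baseline (again with illustrative names), which the abstract says is replaced here by a level-set evolution for stability:

```python
def fuse_tsdf(tsdf, w, tsdf_obs, w_obs, w_max=64.0):
    """KinectFusion-style per-voxel weighted running average of truncated
    SDFs. Plain averaging like this does not keep the fused field a valid
    SDF, which is the instability the paper's level-set fusion addresses."""
    w_sum = w + w_obs
    fused = (w * tsdf + w_obs * tsdf_obs) / np.maximum(w_sum, 1e-8)
    return fused, np.minimum(w_sum, w_max)
```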
| Original language | English |
| --- | --- |
| Pages (from-to) | 2211-2225 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Circuits and Systems for Video Technology |
| Volume | 32 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 2022 Apr 1 |
Bibliographical note
Funding Information: This work was supported in part by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIT) under Grant 2020R1A2C3011697, and in part by the Yonsei University Research Fund of 2021 under Grant 2021-22-0001.
Publisher Copyright:
© 1991-2012 IEEE.
All Science Journal Classification (ASJC) codes
- Media Technology
- Electrical and Electronic Engineering