Space-time hole filling with random walks in view extrapolation for 3D video

Sunghwan Choi, Bumsub Ham, Kwanghoon Sohn

Research output: Contribution to journal › Article

29 Citations (Scopus)

Abstract

In this paper, a space-time hole filling approach is presented to deal with a disocclusion when a view is synthesized for the 3D video. The problem becomes even more complicated when the view is extrapolated from a single view, since the hole is large and has no stereo depth cues. Although many techniques have been developed to address this problem, most of them focus only on view interpolation. We propose a space-time joint filling method for color and depth videos in view extrapolation. For proper texture and depth to be sampled in the following hole filling process, the background of a scene is automatically segmented by the random walker segmentation in conjunction with the hole formation process. Then, the patch candidate selection process is formulated as a labeling problem, which can be solved with random walks. The patch candidates that best describe the hole region are dynamically selected in the space-time domain, and the hole is filled with the optimal patch for ensuring both spatial and temporal coherence. The experimental results show that the proposed method is superior to state-of-the-art methods and provides both spatially and temporally consistent results with significantly reduced flicker artifacts.
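The core machinery the abstract refers to, random walker labeling, assigns each unlabeled graph node the label whose seed nodes a random walker starting there would most likely reach first; this reduces to solving a linear system in the graph Laplacian. The following is a minimal, generic sketch of that labeling step (not the paper's implementation; the function name and dense-matrix setup are illustrative assumptions):

```python
import numpy as np

def random_walker_labels(weights, seeds):
    """Random-walker labeling on a weighted graph.

    weights: (n, n) symmetric edge-weight matrix (higher = stronger affinity)
    seeds:   dict {node_index: label} with labels 0..K-1
    Returns a length-n integer array of hard label assignments.
    """
    n = weights.shape[0]
    # Graph Laplacian: L = D - W
    L = np.diag(weights.sum(axis=1)) - weights
    seeded = sorted(seeds)
    unseeded = [i for i in range(n) if i not in seeds]
    labels = sorted(set(seeds.values()))
    # Partition L into the unseeded block and its coupling to the seeds
    Lu = L[np.ix_(unseeded, unseeded)]
    B = L[np.ix_(unseeded, seeded)]
    # One indicator column per label over the seeded nodes
    M = np.array([[1.0 if seeds[s] == k else 0.0 for k in labels]
                  for s in seeded])
    # Probability that a walker from each unseeded node first hits each label
    probs = np.linalg.solve(Lu, -B @ M)
    out = np.empty(n, dtype=int)
    out[seeded] = [seeds[s] for s in seeded]
    out[unseeded] = np.argmax(probs, axis=1)
    return out
```

On a 5-node path graph with the two end nodes seeded with different labels, the walker probabilities partition the path at its midpoint, which is the behavior the paper exploits both for background segmentation and for selecting patch candidates.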

Original language: English
Article number: 6476015
Pages (from-to): 2429-2441
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Volume: 22
Issue number: 6
DOIs: 10.1109/TIP.2013.2251646
Publication status: Published - 2013 May 2

Fingerprint

  • Extrapolation
  • Labeling
  • Interpolation
  • Textures
  • Color

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Cite this

@article{6192a5710ea748c2a6593be6b9a8c959,
title = "Space-time hole filling with random walks in view extrapolation for 3D video",
abstract = "In this paper, a space-time hole filling approach is presented to deal with a disocclusion when a view is synthesized for the 3D video. The problem becomes even more complicated when the view is extrapolated from a single view, since the hole is large and has no stereo depth cues. Although many techniques have been developed to address this problem, most of them focus only on view interpolation. We propose a space-time joint filling method for color and depth videos in view extrapolation. For proper texture and depth to be sampled in the following hole filling process, the background of a scene is automatically segmented by the random walker segmentation in conjunction with the hole formation process. Then, the patch candidate selection process is formulated as a labeling problem, which can be solved with random walks. The patch candidates that best describe the hole region are dynamically selected in the space-time domain, and the hole is filled with the optimal patch for ensuring both spatial and temporal coherence. The experimental results show that the proposed method is superior to state-of-the-art methods and provides both spatially and temporally consistent results with significantly reduced flicker artifacts.",
author = "Sunghwan Choi and Bumsub Ham and Kwanghoon Sohn",
year = "2013",
month = "5",
day = "2",
doi = "10.1109/TIP.2013.2251646",
language = "English",
volume = "22",
pages = "2429--2441",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "6",
}

Space-time hole filling with random walks in view extrapolation for 3D video. / Choi, Sunghwan; Ham, Bumsub; Sohn, Kwanghoon.

In: IEEE Transactions on Image Processing, Vol. 22, No. 6, 6476015, 02.05.2013, p. 2429-2441.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Space-time hole filling with random walks in view extrapolation for 3D video

AU - Choi, Sunghwan

AU - Ham, Bumsub

AU - Sohn, Kwanghoon

PY - 2013/5/2

Y1 - 2013/5/2

N2 - In this paper, a space-time hole filling approach is presented to deal with a disocclusion when a view is synthesized for the 3D video. The problem becomes even more complicated when the view is extrapolated from a single view, since the hole is large and has no stereo depth cues. Although many techniques have been developed to address this problem, most of them focus only on view interpolation. We propose a space-time joint filling method for color and depth videos in view extrapolation. For proper texture and depth to be sampled in the following hole filling process, the background of a scene is automatically segmented by the random walker segmentation in conjunction with the hole formation process. Then, the patch candidate selection process is formulated as a labeling problem, which can be solved with random walks. The patch candidates that best describe the hole region are dynamically selected in the space-time domain, and the hole is filled with the optimal patch for ensuring both spatial and temporal coherence. The experimental results show that the proposed method is superior to state-of-the-art methods and provides both spatially and temporally consistent results with significantly reduced flicker artifacts.

AB - In this paper, a space-time hole filling approach is presented to deal with a disocclusion when a view is synthesized for the 3D video. The problem becomes even more complicated when the view is extrapolated from a single view, since the hole is large and has no stereo depth cues. Although many techniques have been developed to address this problem, most of them focus only on view interpolation. We propose a space-time joint filling method for color and depth videos in view extrapolation. For proper texture and depth to be sampled in the following hole filling process, the background of a scene is automatically segmented by the random walker segmentation in conjunction with the hole formation process. Then, the patch candidate selection process is formulated as a labeling problem, which can be solved with random walks. The patch candidates that best describe the hole region are dynamically selected in the space-time domain, and the hole is filled with the optimal patch for ensuring both spatial and temporal coherence. The experimental results show that the proposed method is superior to state-of-the-art methods and provides both spatially and temporally consistent results with significantly reduced flicker artifacts.

UR - http://www.scopus.com/inward/record.url?scp=84876786740&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84876786740&partnerID=8YFLogxK

U2 - 10.1109/TIP.2013.2251646

DO - 10.1109/TIP.2013.2251646

M3 - Article

AN - SCOPUS:84876786740

VL - 22

SP - 2429

EP - 2441

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

IS - 6

M1 - 6476015

ER -