Probability-based rendering for view synthesis

Bumsub Ham, Dongbo Min, Changjae Oh, Minh N. Do, Kwanghoon Sohn

Research output: Contribution to journal › Article

26 Citations (Scopus)

Abstract

In this paper, a probability-based rendering (PBR) method is described for reconstructing an intermediate view with a steady-state matching probability (SSMP) density function. Conventionally, given multiple reference images, the intermediate view is synthesized via the depth image-based rendering technique, in which geometric information (e.g., depth) is explicitly leveraged; even small depth errors thus lead to serious rendering artifacts in the synthesized view. We address this problem by formulating the rendering process as an image fusion in which the textures of all probable matching points are adaptively blended with the SSMP, which represents the likelihood that points among the input reference images are matched. The PBR hence becomes more robust against depth estimation errors than existing view synthesis approaches. The matching probability (MP) in the steady state, the SSMP, is inferred for each pixel via the random walk with restart (RWR). The RWR always guarantees a visually consistent MP, as opposed to conventional optimization schemes (e.g., diffusion- or filtering-based approaches), whose accuracy heavily depends on the parameters used. Experimental results demonstrate the superiority of the PBR over existing view synthesis approaches, both qualitatively and quantitatively. In particular, the PBR is effective in suppressing flicker artifacts in virtual video rendering even though no temporal aspect is considered. Moreover, the depth map computed by our RWR-based method (by simply choosing the most probable matching point) is shown to be comparable to those of state-of-the-art local stereo matching methods.
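The abstract describes two ingredients: a per-pixel matching probability smoothed to a steady state via random walk with restart, and a view synthesized by blending candidate textures with those probabilities. The sketch below illustrates this idea only; it is not the authors' implementation — the 1-D scanline neighborhood, uniform transition weights, wrap-around `np.roll`, and the `restart_prob`/`iters` values are all simplifying assumptions (the paper's transition matrix is edge-aware and operates on full images).

```python
import numpy as np

def rwr_steady_state(likelihood, restart_prob=0.05, iters=100):
    """Toy random walk with restart over disparity candidates on one scanline.

    likelihood: (W, D) array of per-pixel matching likelihoods for D
    disparity candidates (hypothetical shape for illustration).
    """
    # Normalize likelihoods into an initial matching-probability distribution.
    p = likelihood / likelihood.sum(axis=1, keepdims=True)
    restart = p.copy()  # restart distribution of the RWR
    for _ in range(iters):
        # Uniform neighbor averaging stands in for the transition step;
        # np.roll wraps at the scanline ends (a simplification).
        smoothed = (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0) + p) / 3.0
        p = (1 - restart_prob) * smoothed + restart_prob * restart
        p /= p.sum(axis=1, keepdims=True)  # keep each pixel's MPs a pdf
    return p

def blend_view(candidates, ssmp):
    """Probability-based rendering: per-pixel convex blend of the candidate
    textures (intensities fetched from the reference images) with SSMP weights.
    """
    return (candidates * ssmp).sum(axis=1)
```

Choosing `candidates[i, ssmp[i].argmax()]` instead of blending gives the winner-take-all depth/disparity estimate mentioned at the end of the abstract.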

Original language: English
Article number: 6690212
Pages (from-to): 870-884
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Volume: 23
Issue number: 2
DOI: 10.1109/TIP.2013.2295716
Publication status: Published - 2014 Feb 1

Fingerprint

  • Image fusion
  • Error analysis
  • Probability density function
  • Textures
  • Pixels

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Cite this

Ham, Bumsub; Min, Dongbo; Oh, Changjae; Do, Minh N.; Sohn, Kwanghoon. / Probability-based rendering for view synthesis. In: IEEE Transactions on Image Processing. 2014; Vol. 23, No. 2. pp. 870-884.
@article{a9d3533bbaff42cc9fe967e63244d684,
title = "Probability-based rendering for view synthesis",
abstract = "In this paper, a probability-based rendering (PBR) method is described for reconstructing an intermediate view with a steady-state matching probability (SSMP) density function. Conventionally, given multiple reference images, the intermediate view is synthesized via the depth image-based rendering technique in which geometric information (e.g., depth) is explicitly leveraged, thus leading to serious rendering artifacts on the synthesized view even with small depth errors. We address this problem by formulating the rendering process as an image fusion in which the textures of all probable matching points are adaptively blended with the SSMP representing the likelihood that points among the input reference images are matched. The PBR hence becomes more robust against depth estimation errors than existing view synthesis approaches. The MP in the steady-state, SSMP, is inferred for each pixel via the random walk with restart (RWR). The RWR always guarantees visually consistent MP, as opposed to conventional optimization schemes (e.g., diffusion or filtering-based approaches), the accuracy of which heavily depends on parameters used. Experimental results demonstrate the superiority of the PBR over the existing view synthesis approaches both qualitatively and quantitatively. Especially, the PBR is effective in suppressing flicker artifacts of virtual video rendering although no temporal aspect is considered. Moreover, it is shown that the depth map itself calculated from our RWR-based method (by simply choosing the most probable matching point) is also comparable with that of the state-of-the-art local stereo matching methods.",
author = "Bumsub Ham and Dongbo Min and Changjae Oh and {Minh N.} Do and Kwanghoon Sohn",
year = "2014",
month = "2",
day = "1",
doi = "10.1109/TIP.2013.2295716",
language = "English",
volume = "23",
pages = "870--884",
journal = "IEEE Transactions on Image Processing",
issn = "1057-7149",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "2",
}
