A Bayesian network framework for vision-based semantic scene understanding

Seung Bin Im, Keum Sung Hwang, Sung Bae Cho

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

For a robot to understand a scene, it must infer and extract meaningful information from vision sensor data. Because scene understanding amounts to recognizing several visual contexts, these contextual cues must be extracted and their relationships understood. However, extracting context from visual information is difficult: the information is uncertain in changing environments, feature extraction methods are imperfect, and reasoning over the complex relationships is computationally expensive. To manage these uncertainties effectively, this paper adopts a Bayesian probabilistic approach and proposes a Bayesian network framework that synthesizes low-level features and high-level semantic cues. The framework describes how to develop and utilize an integrated Bayesian network model. Experimental results on two applications demonstrate the efficacy of the proposed framework.
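
The abstract describes a Bayesian network that fuses low-level visual features with high-level semantic cues. As a rough illustration of that general idea (not the authors' actual model), the following minimal Python sketch builds a two-level discrete Bayesian network in which a hypothetical semantic "place" node generates two discretized low-level cues, then infers the posterior over the place by direct enumeration. All node names, feature values, and probability tables below are invented for illustration.

# Minimal sketch: fuse two discretized low-level visual cues into a posterior
# over a high-level semantic "place" node via a two-level Bayesian network.
# All node names and probability values are illustrative assumptions,
# not taken from the paper.

P_place = {"office": 0.5, "corridor": 0.5}  # prior over the semantic node

# Conditional probability tables P(feature value | place) for two
# hypothetical low-level cues.
P_feature = {
    "edge_density": {
        "office":   {"low": 0.2, "high": 0.8},
        "corridor": {"low": 0.7, "high": 0.3},
    },
    "dominant_color": {
        "office":   {"warm": 0.6, "cool": 0.4},
        "corridor": {"warm": 0.3, "cool": 0.7},
    },
}

def posterior(evidence):
    """Return P(place | evidence) by direct enumeration (Bayes' rule)."""
    unnormalized = {}
    for place, prior in P_place.items():
        score = prior
        for feature, value in evidence.items():
            score *= P_feature[feature][place][value]
        unnormalized[place] = score
    total = sum(unnormalized.values())
    return {place: score / total for place, score in unnormalized.items()}

if __name__ == "__main__":
    # Example: high edge density and a cool dominant color are observed.
    print(posterior({"edge_density": "high", "dominant_color": "cool"}))
    # -> approximately {'office': 0.60, 'corridor': 0.40} with the toy CPTs above

With these toy numbers, the observed cues shift the posterior toward "office" (about 0.60 versus 0.40). A full integrated model in the spirit of the paper would involve many more feature and context nodes and richer dependencies, but the fusion step is the same kind of posterior computation.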

Original language: English
Title of host publication: 16th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN
Pages: 839-844
Number of pages: 6
DOI: https://doi.org/10.1109/ROMAN.2007.4415201
Publication status: Published - 2007 Dec 1
Event: 16th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN - Jeju, Korea, Republic of
Duration: 2007 Aug 26 - 2007 Aug 29

Publication series

Name: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication

Other

Other: 16th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN
Country: Korea, Republic of
City: Jeju
Period: 07/8/26 - 07/8/29

Fingerprint

Bayesian networks
Semantics
Feature extraction
Computational complexity
Robots
Sensors
Uncertainty

All Science Journal Classification (ASJC) codes

  • Engineering (all)

Cite this

Im, S. B., Hwang, K. S., & Cho, S. B. (2007). A Bayesian network framework for vision-based semantic scene understanding. In 16th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN (pp. 839-844). [4415201] (Proceedings - IEEE International Workshop on Robot and Human Interactive Communication). https://doi.org/10.1109/ROMAN.2007.4415201