Automatic extraction of visual landmarks for a mobile robot under uncertainty of vision and motion

il Moon, Jun Miura, Yoshiaki Shirai

Research output: Contribution to journal › Article

8 Citations (Scopus)

Abstract

This paper proposes a method to autonomously extract stable visual landmarks from sensory data. Given a 2D occupancy map, a mobile robot first extracts vertical line features which are distinct and on vertical planar surfaces, because they are expected to be observed reliably from various viewpoints. Since the feature information such as the position and the length includes uncertainties due to errors of vision and motion of the robot, the robot then reduces the uncertainty by matching the planar surface containing the features to the map. As a result, the robot obtains modeled stable visual landmarks from the extracted features. These processes are performed on-line in order to adapt to actual changes of lighting and scene depending on the robot's view. Experimental results in various scenes show the validity of the proposed method.
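The abstract describes reducing a feature's positional uncertainty by matching its containing planar surface to the map, but this record gives no implementation detail. As a purely illustrative sketch (not the paper's actual method), the idea of combining an uncertain vision-based estimate with an independent map-based one can be shown with inverse-variance (Kalman-style) fusion of two 1-D Gaussian estimates; all numeric values below are hypothetical:

```python
def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Inverse-variance (Kalman-style) fusion of two independent 1-D
    Gaussian estimates; the fused variance is never larger than either
    input variance, i.e. uncertainty is reduced."""
    k = var_a / (var_a + var_b)           # gain pulling toward estimate b
    mean = mean_a + k * (mean_b - mean_a)
    var = (1.0 - k) * var_a
    return mean, var

# Vision gives the landmark position with relatively large uncertainty ...
vision_mean, vision_var = 2.35, 0.20 ** 2
# ... while matching the containing planar surface to the 2D map yields
# an independent, tighter estimate (illustrative values only).
map_mean, map_var = 2.30, 0.05 ** 2

fused_mean, fused_var = fuse_estimates(vision_mean, vision_var,
                                       map_mean, map_var)
# The fused variance is smaller than both inputs, which is the sense in
# which map matching "reduces the uncertainty" of the extracted feature.
print(fused_mean, fused_var)
```

This is only a minimal stand-in for whatever uncertainty model the paper itself uses; the fused estimate lies between the two inputs, weighted toward the lower-variance one.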

Original language: English
Pages (from-to): 1188-1193
Number of pages: 6
Journal: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 2
DOIs: 10.1109/ROBOT.2001.932772
Publication status: Published - 2001 Jan 1

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

Cite this

@article{515821c1d885471fbdd3026bad2ce2ce,
title = "Automatic extraction of visual landmarks for a mobile robot under uncertainty of vision and motion",
abstract = "This paper proposes a method to autonomously extract stable visual landmarks from sensory data. Given a 2D occupancy map, a mobile robot first extracts vertical line features which are distinct and on vertical planar surfaces, because they are expected to be observed reliably from various viewpoints. Since the feature information such as the position and the length includes uncertainties due to errors of vision and motion of the robot, the robot then reduces the uncertainty by matching the planar surface containing the features to the map. As a result, the robot obtains modeled stable visual landmarks from the extracted features. These processes are performed on-line in order to adapt to actual changes of lighting and scene depending on the robot's view. Experimental results in various scenes show the validity of the proposed method.",
author = "il Moon and Jun Miura and Yoshiaki Shirai",
year = "2001",
month = "1",
day = "1",
doi = "10.1109/ROBOT.2001.932772",
language = "English",
volume = "2",
pages = "1188--1193",
journal = "Proceedings - IEEE International Conference on Robotics and Automation",
issn = "1050-4729",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

Automatic extraction of visual landmarks for a mobile robot under uncertainty of vision and motion. / Moon, il; Miura, Jun; Shirai, Yoshiaki.

In: Proceedings - IEEE International Conference on Robotics and Automation, Vol. 2, 01.01.2001, p. 1188-1193.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Automatic extraction of visual landmarks for a mobile robot under uncertainty of vision and motion

AU - Moon, il

AU - Miura, Jun

AU - Shirai, Yoshiaki

PY - 2001/1/1

Y1 - 2001/1/1

N2 - This paper proposes a method to autonomously extract stable visual landmarks from sensory data. Given a 2D occupancy map, a mobile robot first extracts vertical line features which are distinct and on vertical planar surfaces, because they are expected to be observed reliably from various viewpoints. Since the feature information such as the position and the length includes uncertainties due to errors of vision and motion of the robot, the robot then reduces the uncertainty by matching the planar surface containing the features to the map. As a result, the robot obtains modeled stable visual landmarks from the extracted features. These processes are performed on-line in order to adapt to actual changes of lighting and scene depending on the robot's view. Experimental results in various scenes show the validity of the proposed method.

AB - This paper proposes a method to autonomously extract stable visual landmarks from sensory data. Given a 2D occupancy map, a mobile robot first extracts vertical line features which are distinct and on vertical planar surfaces, because they are expected to be observed reliably from various viewpoints. Since the feature information such as the position and the length includes uncertainties due to errors of vision and motion of the robot, the robot then reduces the uncertainty by matching the planar surface containing the features to the map. As a result, the robot obtains modeled stable visual landmarks from the extracted features. These processes are performed on-line in order to adapt to actual changes of lighting and scene depending on the robot's view. Experimental results in various scenes show the validity of the proposed method.

UR - http://www.scopus.com/inward/record.url?scp=0034860532&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0034860532&partnerID=8YFLogxK

U2 - 10.1109/ROBOT.2001.932772

DO - 10.1109/ROBOT.2001.932772

M3 - Article

AN - SCOPUS:0034860532

VL - 2

SP - 1188

EP - 1193

JO - Proceedings - IEEE International Conference on Robotics and Automation

JF - Proceedings - IEEE International Conference on Robotics and Automation

SN - 1050-4729

ER -