Automatic extraction of visual landmarks for a mobile robot under uncertainty of vision and motion

Inhyuk Moon, Jun Miura, Yoshiaki Shirai

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

This paper proposes a method to autonomously extract stable visual landmarks from sensory data. Given a 2D occupancy map, a mobile robot first extracts vertical line features which are distinct and lie on vertical planar surfaces, because such features are expected to be observed reliably from various viewpoints. Since the feature information, such as position and length, includes uncertainties due to errors in the robot's vision and motion, the robot then reduces the uncertainty by matching the planar surface containing the features to the map. As a result, the robot obtains modeled stable visual landmarks from the extracted features. These processes are performed on-line in order to adapt to actual changes of lighting and scene depending on the robot's view. Experimental results in various scenes show the validity of the proposed method.
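The uncertainty-reduction step described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simple 1D setting in which each extracted feature and each map surface carries a position estimate with a variance, a feature is gated to a surface by distance, and the two estimates are combined by standard variance-weighted (Kalman-style) fusion. The function names and the matching tolerance are hypothetical.

```python
# Hypothetical sketch of matching uncertain vision features to a known map
# and fusing the estimates to reduce uncertainty (variance-weighted fusion).

def fuse_with_map(feature_pos, feature_var, map_pos, map_var):
    """1D Kalman-style fusion of a vision estimate with a map estimate."""
    gain = feature_var / (feature_var + map_var)
    fused_pos = feature_pos + gain * (map_pos - feature_pos)
    fused_var = (1.0 - gain) * feature_var  # always <= feature_var
    return fused_pos, fused_var

def extract_stable_landmarks(features, map_surfaces, match_tol=0.3):
    """Keep only features that match a map surface; fuse to reduce variance.

    features:     list of (position, variance) from vision
    map_surfaces: list of (position, variance) from the 2D occupancy map
    match_tol:    assumed distance gate for associating feature and surface
    """
    landmarks = []
    for pos, var in features:
        for m_pos, m_var in map_surfaces:
            if abs(pos - m_pos) <= match_tol:  # gate: feature lies on surface
                landmarks.append(fuse_with_map(pos, var, m_pos, m_var))
                break  # one surface per feature in this sketch
    return landmarks
```

Features with no matching surface are discarded as unstable, and matched features inherit a variance no larger than either input estimate, mirroring the paper's idea of turning raw, uncertain features into modeled stable landmarks.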

Original language: English
Pages (from-to): 1188-1193
Number of pages: 6
Journal: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 2
DOIs
Publication status: Published - 2001

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering
