Abstract
This paper proposes a method to autonomously extract stable visual landmarks from sensory data. Given a 2D occupancy map, a mobile robot first extracts vertical line features that are distinct and lie on vertical planar surfaces, because such features are expected to be observed reliably from various viewpoints. Since the feature information, such as position and length, includes uncertainties due to errors in the robot's vision and motion, the robot then reduces this uncertainty by matching the planar surface containing the features to the map. As a result, the robot obtains modeled stable visual landmarks from the extracted features. These processes are performed on-line in order to adapt to changes in lighting and scene as the robot's view changes. Experimental results in various scenes show the validity of the proposed method.
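The uncertainty-reduction step described above can be illustrated with a minimal sketch: a feature's estimated position is snapped to the nearest wall segment of the map, and the two position estimates are fused as independent Gaussians, shrinking the variance. The function names, the scalar-variance model, and the map representation as 2D wall segments are illustrative assumptions, not the paper's actual formulation.

```python
import math

def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Fuse two scalar Gaussian estimates (product of Gaussians)."""
    k = var_a / (var_a + var_b)
    return mean_a + k * (mean_b - mean_a), (1.0 - k) * var_a

def project_point_to_segment(p, a, b):
    """Project point p onto segment ab (all 2-tuples)."""
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    t = ((p[0] - ax) * dx + (p[1] - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp to the segment
    return (ax + t * dx, ay + t * dy)

def refine_landmark(feature_pos, feature_var, wall_segments, map_var):
    """Snap a vertical-line feature to the nearest map wall segment and
    fuse the feature and map position estimates, reducing uncertainty."""
    nearest = min(
        (project_point_to_segment(feature_pos, a, b) for a, b in wall_segments),
        key=lambda q: math.dist(q, feature_pos),
    )
    x, vx = fuse_estimates(feature_pos[0], feature_var, nearest[0], map_var)
    y, vy = fuse_estimates(feature_pos[1], feature_var, nearest[1], map_var)
    return (x, y), max(vx, vy)

# A feature observed near a wall running along y = 0:
pos, var = refine_landmark((1.1, 0.2), 0.04, [((0.0, 0.0), (5.0, 0.0))], 0.01)
```

In this toy case the refined position is pulled toward the wall and the resulting variance is strictly smaller than either input variance, which is the qualitative effect the matching step is meant to achieve.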
| Original language | English |
| --- | --- |
| Pages (from-to) | 1188-1193 |
| Number of pages | 6 |
| Journal | Proceedings - IEEE International Conference on Robotics and Automation |
| Volume | 2 |
| DOIs | |
| Publication status | Published - 2001 |
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Artificial Intelligence
- Electrical and Electronic Engineering