Visual homing navigation has long been a challenging problem in indoor localization and navigation. Inspired by insect navigation, the snapshot model was introduced for homing navigation, where a pair of snapshots, one at the current location and one at the nest, are compared to guide the homing direction. We investigate Haar-like visual features to extract visual cues based on the snapshot model. The Haar-like features consist of masks randomly generated over the snapshot image at the home location; their matching scores on the snapshot available at the current location are then calculated as a correspondence measure. We draw landmark vectors from the correspondence measures of the Haar-like features at their angular positions. Interestingly, a collection of Haar-like features provides visual characteristics that reflect a pair of snapshot images, which can determine the homing direction. In this paper, we propose two homing methods based on image differences computed with Haar-like features: the Haar-like landmark vector model and the Haar-like image distance model. We demonstrate the effectiveness of the methods in several environments.
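The matching process described above can be illustrated with a minimal sketch. It assumes grayscale panoramic snapshots stored as NumPy arrays whose columns correspond to azimuth angles; the two-rectangle mask shape, the absolute-response-difference matching score, and the landmark-vector rule (unit-vector disparity between matched angular positions) are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def haar_response(img, x, y, w, h):
    """Two-rectangle Haar-like response at column x: left half minus
    right half of a w-by-h window (wraps around the panorama)."""
    W = img.shape[1]
    cols = np.arange(x, x + w) % W          # wrap horizontally
    patch = img[y:y + h, :][:, cols]
    half = w // 2
    return patch[:, :half].sum() - patch[:, half:].sum()

def best_match_column(home, cur, x, y, w, h):
    """Column of the current snapshot whose Haar-like response best
    matches the response recorded at column x of the home snapshot
    (matching score here: absolute response difference)."""
    target = haar_response(home, x, y, w, h)
    W = cur.shape[1]
    scores = [abs(haar_response(cur, c, y, w, h) - target) for c in range(W)]
    return int(np.argmin(scores))

def homing_direction(home, cur, n_feats=20):
    """Sum landmark vectors from randomly generated Haar-like masks and
    return the resulting homing angle (a sketch of one possible rule)."""
    H, W = home.shape
    v = np.zeros(2)
    for _ in range(n_feats):
        w = int(rng.integers(4, 12)) * 2            # even mask width
        h = int(rng.integers(4, min(12, H)))
        x = int(rng.integers(0, W))
        y = int(rng.integers(0, H - h))
        c = best_match_column(home, cur, x, y, w, h)
        a0, a1 = 2 * np.pi * x / W, 2 * np.pi * c / W
        # landmark vector: disparity between matched angular positions
        v += np.array([np.cos(a1) - np.cos(a0), np.sin(a1) - np.sin(a0)])
    return float(np.arctan2(v[1], v[0]))
```

For a current snapshot that is a pure horizontal (rotational) shift of the home snapshot, `best_match_column` recovers the shifted position of each mask exactly, since the wrapped responses coincide.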
Funding Information:
This work was supported by the National Research Foundation of Korea through the Korean Government (MSIT) under Grant 2017R1A2B4011455.