A sensor-based vision localization system is one of the most essential technologies in computer vision applications such as autonomous navigation and surveillance. Conventional sensor-based vision localization systems suffer from three inherent limitations: sensitivity to illumination variations, sensitivity to viewpoint variations, and high computational complexity. To overcome these problems, we propose a robust image matching method that provides invariance to illumination and viewpoint variations, and we incorporate this scheme into a vision-based localization system. Based on the proposed image matching method, we design a robust localization system that delivers satisfactory localization performance with low computational complexity. Specifically, to address the illumination and viewpoint problems, we extract keypoints using virtual views generated from a query image and compute a descriptor based on the local average patch difference, similar to HC-LBP. Moreover, we propose a key frame selection method and a simple tree scheme for fast image search. Experimental results show that the proposed localization system is four times faster than existing systems and achieves better matching performance than existing algorithms in challenging environments with difficult illumination and viewpoint conditions.
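The abstract does not specify the exact descriptor construction, but a local-average-patch-difference encoding in the LBP family can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the patch size, neighbor layout, and thresholding rule (comparing each neighboring patch mean against the center patch mean to form an 8-bit code) are all assumptions chosen for clarity.

```python
import numpy as np

def patch_mean(img, y, x, s):
    """Mean intensity of the s x s patch whose top-left corner is (y, x)."""
    return img[y:y + s, x:x + s].mean()

def patch_difference_code(img, y, x, s=3):
    """LBP-style code over patch averages (illustrative, not the
    paper's exact descriptor): each of the 8 neighboring s x s
    patches is compared against the center patch's mean intensity,
    and the 8 comparison results form one byte. Because only the
    sign of the local average difference is kept, the code is
    robust to monotonic illumination changes."""
    center = patch_mean(img, y, x, s)
    # 8 neighboring patches, clockwise from top-left, at stride s
    offsets = [(-s, -s), (-s, 0), (-s, s), (0, s),
               (s, s), (s, 0), (s, -s), (0, -s)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if patch_mean(img, y + dy, x + dx, s) >= center:
            code |= 1 << bit
    return code
```

Averaging over patches before comparing, rather than comparing raw pixels as in classic LBP, suppresses pixel-level noise while preserving the illumination invariance of sign-based encodings.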
Bibliographical note
Funding Information:
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2013R1A2A2A01068338).
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Artificial Intelligence