Ground vehicle navigation in harsh urban conditions by integrating inertial navigation system, global positioning system, odometer and vision data

S. B. Kim, J. C. Bazin, H. K. Lee, K. H. Choi, S. Y. Park

Research output: Contribution to journal › Article

56 Citations (Scopus)

Abstract

Combining GPS, INS and odometer data is widely regarded as one of the most attractive methodologies for ground vehicle navigation. During the long GPS signal blockages inherent to complex urban environments, however, the accuracy of this approach degrades severely. To overcome this limitation, this study proposes a novel ground vehicle navigation system that combines an INS, an odometer and an omnidirectional vision sensor. Compared with traditional cameras, omnidirectional vision sensors acquire much more information about the environment thanks to their wide field of view. The proposed system automatically extracts and tracks vanishing points in omnidirectional images to estimate the vehicle's rotation. This scheme provides robust navigation information: by combining the advantages of vision, odometer and INS, the attitude is estimated without error accumulation and at a high update rate. The accurate rotational information is fed back into a Kalman filter to improve the quality of the INS bridging in harsh urban conditions. Extensive experiments demonstrate that the proposed approach significantly reduces the accumulation of position, velocity and attitude errors; specifically, position accuracy improves by over 30% during simulated GPS outages.
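The aiding idea described in the abstract, an absolute rotation observation bounding the drift of inertial dead reckoning, can be illustrated with a minimal scalar sketch. This is not the authors' filter: the state (heading only), the noise values, and the update interval are all illustrative assumptions, and the vanishing-point-derived heading is simply stubbed in as a direct measurement.

```python
# Illustrative sketch only: a scalar Kalman filter in which an absolute
# heading observation (e.g. one derived from a tracked vanishing point)
# corrects gyro-integrated heading. All parameter values are assumptions.

def predict(x, P, gyro_rate, dt, q=0.01):
    """Propagate the heading estimate with a (biased, noisy) gyro rate."""
    x = x + gyro_rate * dt      # dead-reckoned heading
    P = P + q * dt              # process noise grows the uncertainty
    return x, P

def update(x, P, z, r=0.05):
    """Correct with an absolute heading measurement z."""
    K = P / (P + r)             # Kalman gain
    x = x + K * (z - x)         # residual pulls the estimate back
    P = (1.0 - K) * P
    return x, P

# A gyro with a constant bias drifts; periodic vision fixes bound the error.
x, P = 0.0, 1.0
true_heading, bias = 0.0, 0.02
for step in range(100):
    x, P = predict(x, P, gyro_rate=bias, dt=1.0)   # true rate is 0, bias drifts
    if step % 10 == 9:                             # vision fix every 10 steps
        x, P = update(x, P, z=true_heading)

print(abs(x - true_heading))  # residual heading error stays bounded
```

Without the periodic update, the integrated bias alone would accumulate to 2.0 rad over these 100 steps; with it, the error stays at the level set by the measurement noise.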

Original language: English
Pages (from-to): 814-823
Number of pages: 10
Journal: IET Radar, Sonar and Navigation
Volume: 5
Issue number: 8
DOIs
Publication status: Published - 2011 Oct 1

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
