Adaptive monocular visual-inertial SLAM for real-time augmented reality applications in mobile devices

Jin Chun Piao, Shin-Dug Kim

Research output: Contribution to journal › Article

3 Citations (Scopus)

Abstract

Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
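To make the adaptive execution idea in the abstract concrete, the sketch below shows one way a per-frame dispatcher between full visual-inertial odometry (VIO) and a cheaper optical-flow-based fast visual odometry could look. This is an illustrative assumption, not the authors' implementation: the class, the tracker interfaces, and the `fast_frames_per_vio` knob are all hypothetical, and the paper's actual level-set adaptive policies are more involved.

```python
# Illustrative sketch (not the authors' code) of an adaptive execution
# module: a per-frame policy that dispatches to either full
# visual-inertial odometry (VIO) or a cheaper optical-flow-based
# fast visual odometry (fast VO).

from dataclasses import dataclass
from typing import Any


@dataclass
class Frame:
    image: Any       # camera image for this frame
    imu_window: Any  # IMU samples accumulated since the previous frame


class AdaptiveTracker:
    """Hypothetical dispatcher between full VIO and fast optical-flow VO."""

    def __init__(self, vio, fast_vo, fast_frames_per_vio=3):
        self.vio = vio                                   # full VIO (camera + IMU)
        self.fast_vo = fast_vo                           # optical-flow pose tracker
        self.fast_frames_per_vio = fast_frames_per_vio   # adaptation "level" knob
        self._fast_streak = 0                            # consecutive fast frames

    def track(self, frame: Frame):
        # Run the cheap optical-flow tracker on most frames; fall back to
        # full VIO periodically, or whenever fast tracking fails, so the
        # estimate stays anchored to the IMU's metric scale.
        if self._fast_streak < self.fast_frames_per_vio:
            pose, ok = self.fast_vo.track(frame.image)
            if ok:
                self._fast_streak += 1
                return pose
        pose = self.vio.track(frame.image, frame.imu_window)
        self._fast_streak = 0
        return pose
```

Under this reading, raising `fast_frames_per_vio` trades accuracy for speed, loosely mirroring how the paper's three policy levels yield increasing average tracking-time reductions (7.8%, 12.9%, and 18.8%).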

Original language: English
Article number: 2567
Journal: Sensors (Switzerland)
Volume: 17
Issue number: 11
DOIs: 10.3390/s17112567
Publication status: Published - 2017 Nov 7

Fingerprint

  • Mobile applications
  • Augmented reality
  • Mobile devices
  • Equipment and supplies
  • Optical flows
  • Cameras
  • Autonomous navigation
  • Units of measurement
  • Root-mean-square error
  • Mean square error
  • Sensors
  • Computer vision
  • Robots
  • Navigation
  • Modules
  • Trajectories

All Science Journal Classification (ASJC) codes

  • Analytical Chemistry
  • Atomic and Molecular Physics, and Optics
  • Biochemistry
  • Instrumentation
  • Electrical and Electronic Engineering

Cite this

@article{717805eb83514f7b91b63010e207bcb4,
title = "Adaptive monocular visual-inertial SLAM for real-time augmented reality applications in mobile devices",
abstract = "Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8{\%}, 12.9{\%}, and 18.8{\%} when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.",
author = "Piao, {Jin Chun} and Shin-Dug Kim",
year = "2017",
month = "11",
day = "7",
doi = "10.3390/s17112567",
language = "English",
volume = "17",
journal = "Sensors",
issn = "1424-3210",
publisher = "Multidisciplinary Digital Publishing Institute (MDPI)",
number = "11",

}

Adaptive monocular visual-inertial SLAM for real-time augmented reality applications in mobile devices. / Piao, Jin Chun; Kim, Shin-Dug.

In: Sensors (Switzerland), Vol. 17, No. 11, 2567, 07.11.2017.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Adaptive monocular visual-inertial SLAM for real-time augmented reality applications in mobile devices

AU - Piao, Jin Chun

AU - Kim, Shin-Dug

PY - 2017/11/7

Y1 - 2017/11/7

N2 - Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

AB - Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

UR - http://www.scopus.com/inward/record.url?scp=85033554500&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85033554500&partnerID=8YFLogxK

U2 - 10.3390/s17112567

DO - 10.3390/s17112567

M3 - Article

AN - SCOPUS:85033554500

VL - 17

JO - Sensors

JF - Sensors

SN - 1424-3210

IS - 11

M1 - 2567

ER -