Temporally Consistent Depth Prediction with Flow-Guided Memory Units

Chanho Eom, Hyunjong Park, Bumsub Ham

Research output: Contribution to journal › Article › peer-review

Abstract

Predicting depth from a monocular video sequence is an important task for autonomous driving. Although it has advanced considerably in the past few years, recent methods based on convolutional neural networks (CNNs) discard temporal coherence in the video sequence and estimate depth independently for each frame, which often leads to undesired inconsistent results over time. To address this problem, we propose to memorize temporal consistency in the video sequence and leverage it for the task of depth prediction. To this end, we introduce a two-stream CNN with a flow-guided memory module, where the two streams encode visual and temporal features, respectively. The memory module, implemented using convolutional gated recurrent units (ConvGRUs), takes visual and temporal features as input sequentially, together with optical flow tailored to our task. It selectively memorizes trajectories of individual features and propagates spatial information over time, enforcing long-term temporal consistency on prediction results. We evaluate our method on the KITTI benchmark dataset in terms of depth prediction accuracy, temporal consistency, and runtime, and achieve a new state of the art. We also provide an extensive experimental analysis, clearly demonstrating the effectiveness of our approach to memorizing temporal consistency for depth prediction.
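The paper's flow-guided memory module builds on the standard ConvGRU update, in which the fully connected gate transformations of a GRU are replaced by convolutions so that the hidden state keeps a spatial layout. The following is a minimal single-channel NumPy sketch of that plain ConvGRU step, not the authors' implementation: the kernel size, initialization, and channel counts are illustrative assumptions, and the paper additionally warps the hidden state with optical flow before each update, which is omitted here.

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 'same'-padded 2-D convolution (single channel, for brevity)."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvGRUCell:
    """Single-channel ConvGRU cell: GRU gate equations with convolutions
    in place of fully connected layers (kernel size k is an assumption)."""
    def __init__(self, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # Hypothetical small random kernels; a trained model learns these.
        self.Wz, self.Uz = rng.normal(0, 0.1, (k, k)), rng.normal(0, 0.1, (k, k))
        self.Wr, self.Ur = rng.normal(0, 0.1, (k, k)), rng.normal(0, 0.1, (k, k))
        self.Wh, self.Uh = rng.normal(0, 0.1, (k, k)), rng.normal(0, 0.1, (k, k))

    def step(self, x, h):
        # In the paper, h would first be warped by optical flow (omitted here).
        z = sigmoid(conv2d_same(x, self.Wz) + conv2d_same(h, self.Uz))  # update gate
        r = sigmoid(conv2d_same(x, self.Wr) + conv2d_same(h, self.Ur))  # reset gate
        h_tilde = np.tanh(conv2d_same(x, self.Wh) + conv2d_same(r * h, self.Uh))
        return (1 - z) * h + z * h_tilde  # blend old memory with candidate state
```

Run over a frame sequence, the cell carries a spatial hidden state from frame to frame, which is what lets the memory module propagate information over time rather than predicting each frame independently.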

Original language: English
Article number: 8848860
Pages (from-to): 4626-4636
Number of pages: 11
Journal: IEEE Transactions on Intelligent Transportation Systems
Volume: 21
Issue number: 11
DOI: 10.1109/TITS.2019.2942096
Publication status: Published - 2020 Nov

Bibliographical note

Funding Information:
Manuscript received March 29, 2019; revised June 19, 2019; accepted September 12, 2019. Date of publication September 25, 2019; date of current version October 30, 2020. This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00197, Development of the high-precision natural 3D view generation technology using smart-car multi sensors and deep learning). The Associate Editor for this article was S. S. Nedevschi. (Corresponding author: Bumsub Ham.) The authors are with the School of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, South Korea (e-mail: cheom@yonsei.ac.kr; hyunpark@yonsei.ac.kr; bumsub.ham@yonsei.ac.kr). Digital Object Identifier 10.1109/TITS.2019.2942096

Publisher Copyright:
© 2000-2011 IEEE.

All Science Journal Classification (ASJC) codes

  • Automotive Engineering
  • Mechanical Engineering
  • Computer Science Applications
