Many video understanding tasks work in the offline setting, assuming that the input video is available from start to end. However, many real-world problems, such as autonomous driving and surveillance systems, require the online setting, in which a decision must be made immediately using only the current and past frames of the video. In this paper, we present a novel solution for online action detection using a simple yet effective RNN-based network called the Future Anticipation and Temporally Smoothing network (FATSnet). The proposed network consists of a module that anticipates the future and can be trained in an unsupervised manner with a cycle-consistency loss, and another component that aggregates the past and the future for temporally smooth frame-by-frame predictions. We also propose a solution to mitigate the performance loss incurred when running RNN-based models on very long sequences. Evaluations on TVSeries, THUMOS'14, and BBDB show that our method achieves state-of-the-art performance compared to previous work on online action detection.
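The cycle-consistency idea mentioned above can be sketched minimally: a forward module anticipates a future feature from the present one, a backward module maps it back, and the loss penalizes the round-trip error, so no future labels are needed. The linear maps, dimensions, and function names below are illustrative assumptions for a toy sketch, not the paper's actual FATSnet architecture (which uses RNN cells).

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8  # hypothetical feature dimension, chosen for illustration

# Linear stand-ins for the forward (anticipation) and backward
# (reconstruction) modules; in the paper these would be RNN-based.
W_fwd = rng.standard_normal((D, D)) * 0.1
W_bwd = np.linalg.pinv(W_fwd)  # near-inverse, so the cycle is nearly closed

def anticipate(x):
    """Predict a future feature from the current one (forward module)."""
    return x @ W_fwd

def reconstruct(y):
    """Map the anticipated future feature back to the present (backward module)."""
    return y @ W_bwd

def cycle_consistency_loss(x):
    """Mean squared error between x and its forward-backward round trip."""
    x_cycled = reconstruct(anticipate(x))
    return float(np.mean((x_cycled - x) ** 2))

x = rng.standard_normal((4, D))  # a batch of per-frame features
loss = cycle_consistency_loss(x)
print(loss)
```

Because `W_bwd` here nearly inverts `W_fwd`, the loss is close to zero; training real modules would instead minimize this quantity by gradient descent, supervising the anticipation module without labeled future frames.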
Publication status: Published - 2021 Aug
Bibliographical note
Funding Information:
This work was partially supported by the ICT R&D program of MSIT/IITP (2017-0-01772, Development of QA systems for Video Story Understanding to pass the Video Turing Test). Also, this work was supported in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) Grant funded by the Korean Government (MSIT), Artificial Intelligence Graduate School Program, Yonsei University, under Grant 2020-0-01361.
© 2021 Elsevier Ltd
All Science Journal Classification (ASJC) codes
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence