Abstract
Video frame interpolation aims to synthesize intermediate frames that do not exist between the original frames. While recent deep convolutional neural networks have brought significant advances, interpolation quality is often degraded by large object motion or occlusion. In this work, we propose a video frame interpolation method that explicitly detects occlusion by exploiting depth information. Specifically, we develop a depth-aware flow projection layer that synthesizes intermediate flows which preferentially sample closer objects over farther ones. In addition, we learn hierarchical features to gather contextual information from neighboring pixels. The proposed model then warps the input frames, depth maps, and contextual features based on the optical flow and local interpolation kernels to synthesize the output frame. Our model is compact, efficient, and fully differentiable. Quantitative and qualitative results demonstrate that the proposed model performs favorably against state-of-the-art frame interpolation methods on a wide variety of datasets. The source code and pre-trained model are available at https://github.com/baowenbo/DAIN.
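The core idea of the depth-aware flow projection can be illustrated with a short sketch. The NumPy code below is a simplified, nearest-pixel approximation under assumed names (`flow_0to1`, `depth_0`, `project_flow`), not the paper's differentiable layer: each pixel of frame 0 is pushed along its flow to time t, and the projected flow at the landing position is averaged with inverse-depth weights so that closer pixels dominate when several flow vectors collide (i.e., under occlusion).

```python
# Minimal sketch of depth-weighted flow projection, assuming
# flow_0to1: (H, W, 2) forward optical flow from frame 0 to frame 1
# depth_0:   (H, W)   depth map of frame 0 (larger = farther)
# The explicit Python loop is for readability only.
import numpy as np

def project_flow(flow_0to1, depth_0, t=0.5):
    """Approximate the flow from time t back to frame 0."""
    h, w, _ = flow_0to1.shape
    flow_t0 = np.zeros((h, w, 2), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)

    # Landing position of every frame-0 pixel at time t.
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.rint(xs + t * flow_0to1[..., 0]).astype(int)
    ty = np.rint(ys + t * flow_0to1[..., 1]).astype(int)
    valid = (tx >= 0) & (tx < w) & (ty >= 0) & (ty < h)

    # Inverse depth as the projection weight: closer objects weigh more.
    inv_depth = 1.0 / np.maximum(depth_0, 1e-6)
    for y, x in zip(ys[valid], xs[valid]):
        u, v = ty[y, x], tx[y, x]
        w_yx = inv_depth[y, x]
        flow_t0[u, v] += -t * w_yx * flow_0to1[y, x]
        weight[u, v] += w_yx

    # Normalize the accumulated, depth-weighted flow vectors.
    nonzero = weight > 0
    flow_t0[nonzero] /= weight[nonzero][:, None]
    return flow_t0
```

In the actual model this projection is a differentiable layer, the depth weights come from a learned depth estimator, and holes left by the scatter are filled from neighboring flows; the sketch only conveys how inverse-depth weighting resolves conflicting projections.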
Original language | English
---|---
Title of host publication | Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
Publisher | IEEE Computer Society
Pages | 3698-3707
Number of pages | 10
ISBN (Electronic) | 9781728132938
DOIs |
Publication status | Published - 2019 Jun
Event | 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019 - Long Beach, United States
Duration | 2019 Jun 16 → 2019 Jun 20
Publication series
Name | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
---|---
Volume | 2019-June
ISSN (Print) | 1063-6919
Conference
Conference | 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
---|---
Country/Territory | United States
City | Long Beach
Period | 2019 Jun 16 → 2019 Jun 20
Bibliographical note
Funding Information: This work was supported in part by the National Key Research and Development Program of China (2016YFB1001003), NSFC (61771306), the Natural Science Foundation of Shanghai (18ZR1418100), the Chinese National Key S&T Special Program (2013ZX01033001-002-002), and the Shanghai Key Laboratory of Digital Media Processing and Transmissions (STCSM 18DZ2270700 and 18DZ1112300). It was also supported in part by an NSF CAREER Grant (1149783) and gifts from Adobe, Verisk, and NEC.
Publisher Copyright: © 2019 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Computer Vision and Pattern Recognition