Video deblurring is a challenging problem, as the blur is complex and usually caused by a combination of camera shake, object motion, and depth variation. Optical flow can be used for kernel estimation since it predicts motion trajectories. However, flow estimates are often inaccurate at object boundaries in complex scenes, which are crucial for kernel estimation. In this paper, we exploit semantic segmentation in each blurry frame to understand the scene contents, and use different motion models for different image regions to guide optical flow estimation. While existing pixel-wise blur models assume that the blur kernel is the same as the optical flow during the exposure time, this assumption does not hold when the motion blur trajectory at a pixel differs from the estimated linear optical flow. We analyze the relationship between motion blur trajectory and optical flow, and present a novel pixel-wise non-linear kernel model to account for motion blur. The proposed blur model is based on non-linear optical flow, which describes complex motion blur more effectively. Extensive experiments on challenging blurry videos demonstrate that the proposed algorithm performs favorably against state-of-the-art methods.
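As a minimal sketch of the baseline assumption the abstract refers to (not the paper's proposed non-linear model): if the blur trajectory at a pixel is taken to be the straight line traced by its linear optical flow over the exposure time, the pixel-wise kernel can be built by spreading unit mass along that line. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def linear_blur_kernel(u, v, size=15):
    """Pixel-wise linear motion blur kernel from an optical-flow vector (u, v).

    Illustrates the linear-flow assumption: blur energy is distributed
    uniformly along the straight-line trajectory traced by the flow
    over the exposure, centered on the pixel. A non-linear model (as in
    the paper) would instead integrate along a curved trajectory.
    """
    kernel = np.zeros((size, size))
    c = size // 2  # kernel center corresponds to the pixel position
    # Sample densely along the trajectory to avoid gaps between taps.
    n = max(int(np.hypot(u, v)), 1) * 4
    for t in np.linspace(-0.5, 0.5, n):  # exposure parameterized over [-1/2, 1/2]
        xi, yi = int(round(c + t * u)), int(round(c + t * v))
        if 0 <= xi < size and 0 <= yi < size:
            kernel[yi, xi] += 1.0
    return kernel / kernel.sum()  # normalize so the kernel sums to one
```

For a zero flow vector this degenerates to a delta kernel (no blur); for a purely horizontal flow, all kernel mass lies in the center row, which is exactly the rigidity that breaks down when the true blur trajectory is non-linear.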
Title of host publication: Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 9
Publication status: Published - 22 Dec 2017
Publication series: Proceedings of the IEEE International Conference on Computer Vision
Event: 16th IEEE International Conference on Computer Vision, ICCV 2017 - Venice, Italy (22 Oct 2017 - 29 Oct 2017)
Bibliographical note
Funding Information:
This work is supported in part by the National Key R&D Program of China (No. 2016YFB0800403), National Natural Science Foundation of China (No. 61422213, U1636214), Key Program of the Chinese Academy of Sciences (No. QYZDB-SSWJSC003). Ming-Hsuan Yang is supported in part by the NSF CAREER (No. 1149783), gifts from Adobe and Nvidia. Jinshan Pan is supported by the 973 Program (No. 2014CB347600), NSFC (No. 61522203), NSF of Jiangsu Province (No. BK20140058), National Key R&D Program of China (No. 2016YFB1001001).
© 2017 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition