Abstract
Deblurring images captured in dynamic scenes is challenging because the motion blur is spatially varying, caused by both camera shake and object motion. In this paper, we propose a spatially varying neural network to deblur dynamic scenes. The proposed model is composed of three deep convolutional neural networks (CNNs) and a recurrent neural network (RNN). The RNN is used as a deconvolution operator on feature maps extracted from the input image by one of the CNNs. Another CNN learns the spatially varying weights for the RNN. As a result, the RNN is spatially aware and can implicitly model the deblurring process with spatially varying kernels. To better exploit the properties of the spatially varying RNN, we develop both one-dimensional and two-dimensional RNNs for deblurring. The third component, based on a CNN, reconstructs the final deblurred feature maps into a restored image. In addition, the whole network is end-to-end trainable. Quantitative and qualitative evaluations on benchmark datasets demonstrate that the proposed method performs favorably against state-of-the-art deblurring algorithms.
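To illustrate the idea of an RNN whose recurrence weights are predicted per pixel by a CNN, the following is a minimal sketch, not the authors' implementation: the module names, the single-gate formulation, and the left-to-right scan direction are all assumptions for illustration only.

```python
# Minimal sketch (hypothetical names and shapes; not the paper's code) of a
# spatially variant 1D RNN scan: a small CNN predicts per-pixel recurrence
# weights, and the RNN sweeps the feature maps along one spatial direction.
import torch
import torch.nn as nn


class SpatiallyVariantRNN(nn.Module):
    """Left-to-right 1D recurrent scan whose gate is predicted per pixel."""

    def __init__(self, channels: int):
        super().__init__()
        # Hypothetical weight-generating CNN: one gate value per channel and pixel.
        self.weight_cnn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # keep the recurrence weight in (0, 1)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W) feature maps from the feature-extraction CNN.
        gate = self.weight_cnn(feats)            # spatially varying weights
        h = feats[..., 0]                        # initial hidden column (N, C, H)
        outputs = [h]
        for x_pos in range(1, feats.shape[-1]):  # sweep along the width
            g = gate[..., x_pos]
            # Gated recurrence: blend current features with the previous column.
            h = g * feats[..., x_pos] + (1.0 - g) * h
            outputs.append(h)
        return torch.stack(outputs, dim=-1)      # back to (N, C, H, W)


if __name__ == "__main__":
    rnn = SpatiallyVariantRNN(channels=16)
    out = rnn(torch.randn(2, 16, 32, 32))
    print(out.shape)  # torch.Size([2, 16, 32, 32])
```

In practice, a two-dimensional variant would run such scans in several directions (left-right, right-left, top-down, bottom-up) and fuse the results before the reconstruction CNN; the single-direction scan above is kept only to show how the CNN-predicted weights make the recurrence spatially varying.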
| Original language | English |
| --- | --- |
| Pages (from-to) | 3974-3987 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 44 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - 2022 Aug 1 |
Bibliographical note
Publisher Copyright: © 1979-2012 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Computer Vision and Pattern Recognition
- Computational Theory and Mathematics
- Artificial Intelligence
- Applied Mathematics