Deblurring Dynamic Scenes via Spatially Varying Recurrent Neural Networks

Wenqi Ren, Jiawei Zhang, Jinshan Pan, Sifei Liu, Jimmy S. Ren, Junping Du, Xiaochun Cao, Ming-Hsuan Yang

Research output: Contribution to journal › Article › peer-review

13 Citations (Scopus)

Abstract

Deblurring images captured in dynamic scenes is challenging because the motion blur is spatially varying, caused by both camera shake and object motion. In this paper, we propose a spatially varying neural network to deblur dynamic scenes. The proposed model is composed of three deep convolutional neural networks (CNNs) and a recurrent neural network (RNN). The RNN is used as a deconvolution operator on feature maps extracted from the input image by one of the CNNs. Another CNN is used to learn the spatially varying weights for the RNN. As a result, the RNN is spatially aware and can implicitly model the deblurring process with spatially varying kernels. To better exploit the properties of the spatially varying RNN, we develop both one-dimensional and two-dimensional RNNs for deblurring. The third component, based on a CNN, reconstructs the final deblurred feature maps into a restored image. In addition, the whole network is end-to-end trainable. Quantitative and qualitative evaluations on benchmark datasets demonstrate that the proposed method performs favorably against state-of-the-art deblurring algorithms.
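To illustrate the core idea of an RNN with spatially varying weights, the sketch below runs a one-dimensional recurrence in which each spatial position has its own input and recurrence weight. This is a minimal toy illustration, not the authors' implementation: the function name and the direct passing of per-position weights are assumptions for exposition (in the paper, the weights would be predicted by a separate CNN, and the scan would run over feature maps in multiple directions).

```python
import numpy as np

def spatially_varying_rnn_1d(x, w, theta):
    """Toy 1-D recurrence with per-position weights.

    x     : (n,) input feature row
    w     : (n,) per-position input weights
    theta : (n,) per-position recurrence weights (hypothetically
            supplied by a weight-prediction CNN in the full model)
    """
    h = np.zeros_like(x)
    prev = 0.0
    for i in range(len(x)):
        # Each position mixes its input with the carried state using its
        # own local weights, so the effective deconvolution kernel
        # differs from pixel to pixel.
        h[i] = w[i] * x[i] + theta[i] * prev
        prev = h[i]
    return h

# Positions with theta = 0 ignore their neighbors entirely; positions
# with larger theta aggregate information over a longer spatial range.
out = spatially_varying_rnn_1d(
    np.array([1.0, 2.0, 3.0]),
    np.ones(3),
    np.array([0.0, 0.5, 0.5]),
)
```

Running the same scan in opposite directions (and vertically, for the two-dimensional variant) lets every pixel receive information from the whole image while keeping the per-pixel weighting.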

Original language: English
Pages (from-to): 3974-3987
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 44
Issue number: 8
DOIs
Publication status: Published - 2022 Aug 1

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics
