Dual Convolutional Neural Networks for Low-Level Vision

Jinshan Pan, Deqing Sun, Jiawei Zhang, Jinhui Tang, Jian Yang, Yu-Wing Tai, Ming-Hsuan Yang

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


We propose a general dual convolutional neural network (DualCNN) for low-level vision problems, e.g., super-resolution, edge-preserving filtering, deraining, and dehazing. These problems usually involve estimating two components of the target signals: structures and details. Motivated by this, we design the proposed DualCNN with two parallel branches that recover the structures and details, respectively, in an end-to-end manner. The recovered structures and details can generate the desired signals according to the formation model for each particular application. The DualCNN is a flexible framework for low-level vision tasks and can be easily incorporated into existing CNNs. Experimental results show that the DualCNN can be effectively applied to numerous low-level vision tasks, with favorable performance against state-of-the-art methods that have been specially designed for each individual task.
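The structure/detail decomposition described in the abstract can be illustrated with a minimal sketch. Here a box filter stands in for the learned structure branch and the residual stands in for the detail branch; the additive formation model (target = structure + detail, as used for tasks such as super-resolution) then recombines them. All function and variable names are illustrative, not from the paper:

```python
import numpy as np

def box_smooth(x, k=3):
    # Moving-average "structure" estimate: a hand-crafted stand-in for
    # the structure branch, which in DualCNN is a learned sub-network.
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

signal = np.array([1.0, 2.0, 6.0, 2.0, 1.0, 5.0, 1.0])
structure = box_smooth(signal)   # smooth, large-scale component
detail = signal - structure      # residual, fine-scale component

# Additive formation model: combine the two components to form the target.
reconstructed = structure + detail
assert np.allclose(reconstructed, signal)
```

In the actual DualCNN, both components are predicted by parallel CNN branches trained end-to-end, and the formation model varies per task (e.g., multiplicative transmission for dehazing); this sketch only shows the additive case.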

Original language: English
Pages (from-to): 1440-1458
Number of pages: 19
Journal: International Journal of Computer Vision
Issue number: 6
Publication status: Published - 2022 Jun

Bibliographical note

Funding Information:
This work is supported in part by the National Key Research and Development Program of China under Grant 2018AAA0102001, the National Natural Science Foundation of China under Grants 61872421, 61922043, and 61925204, the Fundamental Research Funds for the Central Universities under Grant 30920041109, and NSF CAREER under Grant 1149783.

Publisher Copyright:
© 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
