Abstract
Single-image super-resolution, i.e., reconstructing a high-resolution image from a low-resolution image, is a critical concern in many computer vision applications. Recent deep learning-based image super-resolution methods employ massive numbers of model parameters to obtain quality gains. However, this leads to increased model size and high computational complexity. To mitigate this, some methods employ recursive parameter sharing for better parameter efficiency. Nevertheless, their designs do not adequately exploit the potential of the recursive operation. In this paper, we propose a novel super-resolution method, called a volatile-nonvolatile memory network (VMNet), to maximize the usefulness of the recursive architecture. Specifically, we design two central components called volatile and nonvolatile memories. By means of these, the recursive feature extraction portion of our model performs effective recursive operations that gradually enhance image quality. Through extensive experiments on ×2, ×3, and ×4 super-resolution tasks, we demonstrate that our method outperforms existing state-of-the-art methods in terms of image quality and complexity via stable progressive super-resolution.
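The paper's code is not reproduced here; purely as an illustration of the recursive design the abstract describes, the following PyTorch-style sketch applies one parameter-shared block repeatedly while carrying two feature states, loosely mirroring the "volatile" and "nonvolatile" memories. The module names, state-update rules, recursion count, and upsampling choice are all assumptions for illustration, not the authors' VMNet architecture.

```python
# Minimal sketch (assumed details, not the authors' VMNet): a recursive
# SR backbone with a shared block applied several times, keeping a
# short-term ("volatile") state and a gradually updated ("nonvolatile") state.
import torch
import torch.nn as nn

class RecursiveSRSketch(nn.Module):
    def __init__(self, channels=64, num_recursions=8, scale=2):
        super().__init__()
        self.num_recursions = num_recursions
        self.head = nn.Conv2d(3, channels, 3, padding=1)       # shallow feature extraction
        self.shared_block = nn.Sequential(                     # parameters reused at every recursion
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.gate = nn.Conv2d(channels, channels, 1)            # assumed update path for the nonvolatile state
        self.tail = nn.Sequential(                              # upsampling + reconstruction
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        feat = self.head(lr)
        volatile = feat        # rewritten at every recursion
        nonvolatile = feat     # accumulated gradually across recursions
        for _ in range(self.num_recursions):
            x = torch.cat([volatile, nonvolatile], dim=1)
            volatile = self.shared_block(x)                      # volatile memory is overwritten
            nonvolatile = nonvolatile + self.gate(volatile)      # nonvolatile memory changes slowly
        return self.tail(nonvolatile)

if __name__ == "__main__":
    model = RecursiveSRSketch(scale=2)
    lr = torch.randn(1, 3, 32, 32)
    print(model(lr).shape)  # torch.Size([1, 3, 64, 64])
```

In this sketch the same `shared_block` weights are reused at every recursion (parameter efficiency), and only the small accumulated update to the nonvolatile state changes from step to step, which is one simple way to realize the gradual, progressive refinement the abstract mentions.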
| Original language | English |
|---|---|
| Pages (from-to) | 37487-37496 |
| Number of pages | 10 |
| Journal | IEEE Access |
| Volume | 9 |
| DOIs | |
| Publication status | Published - 2021 |
Bibliographical note
Funding Information: This work was supported in part by the Artificial Intelligence Graduate School Program, Yonsei University under Grant 2020-0-01361, and in part by the Ministry of Trade, Industry and Energy (MOTIE) under Grant P0014268.
Publisher Copyright:
© 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
All Science Journal Classification (ASJC) codes
- Computer Science (all)
- Materials Science (all)
- Engineering (all)