Reliable Perceptual Loss Computation for GAN-Based Super-Resolution with Edge Texture Metric

J. Kim, C. Lee

Research output: Contribution to journal › Article › peer-review


Super-resolution (SR) is an ill-posed problem: generating a high-resolution (HR) image from a low-resolution (LR) image remains a major challenge. Recently, SR methods based on deep convolutional neural networks (DCNs) have achieved impressive performance improvements. DCN-based SR techniques can be broadly divided into peak signal-to-noise ratio (PSNR)-oriented SR networks and generative adversarial network (GAN)-based SR networks. In most current GAN-based SR networks, the perceptual loss is computed from the feature maps of a single layer, or of several fixed layers, of a differentiable feature extractor such as VGG. This limited layer utilization may produce overly textured artifacts. In this paper, a new edge texture metric (ETM) is proposed to quantify the edge and texture characteristics of images; it is then used, during training only, to select an appropriate layer when computing the perceptual loss. Experimental results show that a GAN-based SR network trained with the proposed method achieves qualitative and quantitative perceptual quality improvements over many existing methods.
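The abstract does not give the ETM formula, so the following is only an illustrative sketch of the general idea: compute an edge/texture statistic from image gradients (here, Sobel responses) and map it to a VGG feature layer for the perceptual loss, with texture-rich content steered toward deeper layers. The metric definition, layer names, and thresholds are all assumptions, not the paper's method.

```python
import numpy as np

def sobel_gradients(img):
    """Horizontal and vertical Sobel responses for a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return gx, gy

def edge_texture_metric(img):
    """Illustrative edge/texture score in [0, 1]: mean gradient magnitude
    normalized by the peak magnitude (NOT the paper's exact ETM)."""
    gx, gy = sobel_gradients(img)
    mag = np.hypot(gx, gy)
    return float(mag.mean() / (mag.max() + 1e-8))

def select_vgg_layer(etm_score,
                     layers=("conv2_2", "conv3_4", "conv4_4", "conv5_4")):
    """Map the score to a feature layer; layer names and the uniform
    binning are hypothetical placeholders for the paper's selection rule."""
    idx = min(int(etm_score * len(layers)), len(layers) - 1)
    return layers[idx]
```

During training, such a rule would pick, per image or patch, which extractor layer's feature maps enter the perceptual loss; at inference time the metric is not needed, matching the abstract's training-only use of the ETM.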

Original language: English
Article number: 9524635
Pages (from-to): 120127-120137
Number of pages: 11
Journal: IEEE Access
Publication status: Published - 2021

Bibliographical note

Funding Information:
This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology under Grant NRF-2020R1A2C1012221.

Publisher Copyright:
© 2013 IEEE.

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)
  • Electrical and Electronic Engineering


