Deep CNN-Based Blind Image Quality Predictor

Jongyoo Kim, Anh Duc Nguyen, Sanghoon Lee

Research output: Contribution to journal › Article

15 Citations (Scopus)

Abstract

Image recognition based on convolutional neural networks (CNNs) has recently been shown to deliver state-of-the-art performance in various areas of computer vision and image processing. Nevertheless, applying a deep CNN to no-reference image quality assessment (NR-IQA) remains challenging because of a critical obstacle: the lack of a training database. In this paper, we propose a CNN-based NR-IQA framework that effectively solves this problem. The proposed method, the deep image quality assessor (DIQA), separates NR-IQA training into two stages: 1) an objective distortion part and 2) a human visual system-related part. In the first stage, the CNN learns to predict the objective error map; in the second stage, the model learns to predict the subjective score. To compensate for the inaccuracy of the objective error map prediction in homogeneous regions, we also propose a reliability map. Two simple handcrafted features are additionally employed to further enhance accuracy. In addition, we propose a way to visualize perceptual error maps to analyze what the deep CNN model has learned. In the experiments, DIQA yielded state-of-the-art accuracy on various databases.
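
To make the two-stage pipeline described above concrete, here is a minimal PyTorch-style sketch of how such training could be organized. It is an illustration under stated assumptions, not the authors' implementation: the backbone architecture, the power-law error map |I_ref - I_dist|^p, the sigmoid-shaped reliability map, the use of the predicted error map's mean and standard deviation as the two handcrafted features, and all hyperparameters (p, alpha, learning rates) are placeholders chosen for clarity, and the data loaders for reference/distorted pairs (stage 1) and distorted images with subjective scores (stage 2) are hypothetical.

```python
# Minimal sketch of a two-stage NR-IQA training loop in the spirit of DIQA.
# Everything below (architecture, loss forms, hyperparameters, loaders) is an
# illustrative assumption, not the paper's reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ErrorMapCNN(nn.Module):
    """Fully convolutional backbone mapping a distorted image to an error map."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),   # grayscale input assumed
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # downsample
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.err_head = nn.Conv2d(64, 1, 1)  # stage 1 output: predicted error map

    def forward(self, x):
        f = self.features(x)
        return self.err_head(f), f


class ScoreHead(nn.Module):
    """Stage 2: regress a subjective score from pooled CNN features plus two
    handcrafted statistics (here: mean and std of the predicted error map)."""

    def __init__(self, feat_ch=64, n_handcrafted=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feat_ch + n_handcrafted, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, feats, err_map):
        pooled = feats.mean(dim=(2, 3))                 # global average pooling
        flat = err_map.flatten(1)
        hand = torch.stack([flat.mean(dim=1), flat.std(dim=1)], dim=1)
        return self.fc(torch.cat([pooled, hand], dim=1)).squeeze(1)


def objective_error_map(ref, dist, p=0.2):
    # |I_ref - I_dist| ** p; a small exponent compresses large errors (assumed value).
    return (ref - dist).abs().clamp_min(1e-6) ** p


def reliability_map(dist, alpha=4.0):
    # A proxy for local texture: high-pass the distorted image (subtract a blurred
    # copy), then squash with a sigmoid so flat, homogeneous regions get low weight.
    # The exact form and alpha are assumptions for illustration.
    low = F.avg_pool2d(dist, kernel_size=7, stride=1, padding=3)
    detail = (dist - low).abs()
    return 2.0 / (1.0 + torch.exp(-alpha * detail)) - 1.0


def train_stage1(model, loader, epochs=1, lr=1e-4):
    """Stage 1: learn to predict the objective error map (no subjective labels)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for ref, dist in loader:          # (B, 1, H, W) reference/distorted pairs
            pred, _ = model(dist)
            target = objective_error_map(ref, dist)
            size = pred.shape[-2:]
            target = F.interpolate(target, size=size, mode="bilinear",
                                   align_corners=False)
            rel = F.interpolate(reliability_map(dist), size=size, mode="bilinear",
                                align_corners=False)
            loss = ((pred - target) ** 2 * rel).mean()  # reliability-weighted MSE
            opt.zero_grad()
            loss.backward()
            opt.step()


def train_stage2(model, head, loader, epochs=1, lr=1e-4):
    """Stage 2: fine-tune toward subjective scores (e.g., MOS) with the score head."""
    opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(epochs):
        for dist, mos in loader:          # distorted images, float scores of shape (B,)
            pred_err, feats = model(dist)
            loss = F.mse_loss(head(feats, pred_err), mos)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Only stage 1 needs reference images (to compute the target error maps), and it requires no human labels; subjective scores enter only in stage 2. Under the abstract's framing, this split is what allows training despite the lack of a large subjective-score database.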

Original language: English
Article number: 8383698
Pages (from-to): 11-24
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 30
Issue number: 1
DOI: 10.1109/TNNLS.2018.2829819
Publication status: Published - Jan 2019

Fingerprint

  • Image quality
  • Neural networks
  • Image recognition
  • Computer vision
  • Image processing
  • Experiments

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Cite this

@article{e0209250e68541aaada0f3062934b3f2,
title = "Deep CNN-Based Blind Image Quality Predictor",
abstract = "Image recognition based on convolutional neural networks (CNNs) has recently been shown to deliver the state-of-the-art performance in various areas of computer vision and image processing. Nevertheless, applying a deep CNN to no-reference image quality assessment (NR-IQA) remains a challenging task due to critical obstacles, i.e., the lack of a training database. In this paper, we propose a CNN-based NR-IQA framework that can effectively solve this problem. The proposed method - deep image quality assessor (DIQA) - separates the training of NR-IQA into two stages: 1) an objective distortion part and 2) a human visual system-related part. In the first stage, the CNN learns to predict the objective error map, and then the model learns to predict subjective score in the second stage. To complement the inaccuracy of the objective error map prediction on the homogeneous region, we also propose a reliability map. Two simple handcrafted features were additionally employed to further enhance the accuracy. In addition, we propose a way to visualize perceptual error maps to analyze what was learned by the deep CNN model. In the experiments, the DIQA yielded the state-of-the-art accuracy on the various databases.",
author = "Jongyoo Kim and Nguyen, {Anh Duc} and Sanghoon Lee",
year = "2019",
month = "1",
doi = "10.1109/TNNLS.2018.2829819",
language = "English",
volume = "30",
pages = "11--24",
journal = "IEEE Transactions on Neural Networks and Learning Systems",
issn = "2162-237X",
publisher = "IEEE Computational Intelligence Society",
number = "1",

}

Deep CNN-Based Blind Image Quality Predictor. / Kim, Jongyoo; Nguyen, Anh Duc; Lee, Sanghoon.

In: IEEE Transactions on Neural Networks and Learning Systems, Vol. 30, No. 1, 8383698, 01.2019, p. 11-24.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Deep CNN-Based Blind Image Quality Predictor

AU - Kim, Jongyoo

AU - Nguyen, Anh Duc

AU - Lee, Sanghoon

PY - 2019/1

Y1 - 2019/1

N2 - Image recognition based on convolutional neural networks (CNNs) has recently been shown to deliver the state-of-the-art performance in various areas of computer vision and image processing. Nevertheless, applying a deep CNN to no-reference image quality assessment (NR-IQA) remains a challenging task due to critical obstacles, i.e., the lack of a training database. In this paper, we propose a CNN-based NR-IQA framework that can effectively solve this problem. The proposed method - deep image quality assessor (DIQA) - separates the training of NR-IQA into two stages: 1) an objective distortion part and 2) a human visual system-related part. In the first stage, the CNN learns to predict the objective error map, and then the model learns to predict subjective score in the second stage. To complement the inaccuracy of the objective error map prediction on the homogeneous region, we also propose a reliability map. Two simple handcrafted features were additionally employed to further enhance the accuracy. In addition, we propose a way to visualize perceptual error maps to analyze what was learned by the deep CNN model. In the experiments, the DIQA yielded the state-of-the-art accuracy on the various databases.

AB - Image recognition based on convolutional neural networks (CNNs) has recently been shown to deliver the state-of-the-art performance in various areas of computer vision and image processing. Nevertheless, applying a deep CNN to no-reference image quality assessment (NR-IQA) remains a challenging task due to critical obstacles, i.e., the lack of a training database. In this paper, we propose a CNN-based NR-IQA framework that can effectively solve this problem. The proposed method - deep image quality assessor (DIQA) - separates the training of NR-IQA into two stages: 1) an objective distortion part and 2) a human visual system-related part. In the first stage, the CNN learns to predict the objective error map, and then the model learns to predict subjective score in the second stage. To complement the inaccuracy of the objective error map prediction on the homogeneous region, we also propose a reliability map. Two simple handcrafted features were additionally employed to further enhance the accuracy. In addition, we propose a way to visualize perceptual error maps to analyze what was learned by the deep CNN model. In the experiments, the DIQA yielded the state-of-the-art accuracy on the various databases.

UR - http://www.scopus.com/inward/record.url?scp=85048558863&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85048558863&partnerID=8YFLogxK

U2 - 10.1109/TNNLS.2018.2829819

DO - 10.1109/TNNLS.2018.2829819

M3 - Article

C2 - 29994270

AN - SCOPUS:85048558863

VL - 30

SP - 11

EP - 24

JO - IEEE Transactions on Neural Networks and Learning Systems

JF - IEEE Transactions on Neural Networks and Learning Systems

SN - 2162-237X

IS - 1

M1 - 8383698

ER -