Deep learning of human visual sensitivity in image quality assessment framework

Jongyoo Kim, Sanghoon Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

41 Citations (Scopus)

Abstract

Since human observers are the ultimate receivers of digital images, image quality metrics should be designed from a human-oriented perspective. Conventionally, a number of full-reference image quality assessment (FR-IQA) methods have adopted computational models of the human visual system (HVS) from psychological vision science research. In this paper, we propose a novel convolutional neural network (CNN)-based FR-IQA model, named Deep Image Quality Assessment (DeepQA), in which the behavior of the HVS is learned from the underlying data distribution of IQA databases. Unlike previous studies, our model seeks the optimal visual weights from the database information itself, without any prior knowledge of the HVS. Through experiments, we show that the predicted visual sensitivity maps agree with human subjective opinions. In addition, DeepQA achieves state-of-the-art prediction accuracy among FR-IQA models.
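The core idea the abstract describes — weighting an objective error map by a visual-sensitivity map before pooling it into a quality score — can be sketched as below. This is a minimal illustration, not the paper's implementation: the log-compressed error map, the `eps` constant, and the normalization are illustrative assumptions, and the sensitivity map is passed in as a placeholder, whereas in DeepQA it is produced by a trained CNN.

```python
import numpy as np

def normalized_log_error(ref, dist, eps=1.0):
    # Pixel-wise squared error between reference and distorted images,
    # log-compressed to mimic perceptual saturation (illustrative choice).
    e = (np.asarray(ref, float) - np.asarray(dist, float)) ** 2
    return np.log(1.0 + e / eps) / np.log(256.0)

def weighted_quality_score(ref, dist, sensitivity):
    # Weight the error map by a sensitivity map, then pool to a scalar.
    # In DeepQA the sensitivity map comes from a trained CNN; here it is
    # simply an array argument of the same shape as the images.
    e = normalized_log_error(ref, dist)
    return float((sensitivity * e).sum() / (sensitivity.sum() + 1e-8))
```

With a uniform sensitivity map this reduces to plain average pooling of the error map; a learned, spatially varying map lets perceptually salient regions dominate the score.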

Original language: English
Title of host publication: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1969-1977
Number of pages: 9
ISBN (Electronic): 9781538604571
DOI: 10.1109/CVPR.2017.213
Publication status: Published - 2017 Nov 6
Event: 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 - Honolulu, United States
Duration: 2017 Jul 21 – 2017 Jul 26

Publication series

Name: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
Volume: 2017-January

Other

Other: 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
Country: United States
City: Honolulu
Period: 17/7/21 – 17/7/26

Fingerprint

  • Image quality
  • Deep learning
  • Neural networks
  • Experiments

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Computer Vision and Pattern Recognition

Cite this

Kim, J., & Lee, S. (2017). Deep learning of human visual sensitivity in image quality assessment framework. In Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 (pp. 1969-1977). (Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017; Vol. 2017-January). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/CVPR.2017.213
Kim, Jongyoo; Lee, Sanghoon. / Deep learning of human visual sensitivity in image quality assessment framework. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. Institute of Electrical and Electronics Engineers Inc., 2017. pp. 1969-1977 (Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017).
@inproceedings{a2a24b8b0e3446cdbdac4eea9ea09234,
title = "Deep learning of human visual sensitivity in image quality assessment framework",
abstract = "Since human observers are the ultimate receivers of digital images, image quality metrics should be designed from a human-oriented perspective. Conventionally, a number of full-reference image quality assessment (FR-IQA) methods adopted various computational models of the human visual system (HVS) from psychological vision science research. In this paper, we propose a novel convolutional neural networks (CNN) based FR-IQA model, named Deep Image Quality Assessment (DeepQA), where the behavior of the HVS is learned from the underlying data distribution of IQA databases. Different from previous studies, our model seeks the optimal visual weight based on understanding of database information itself without any prior knowledge of the HVS. Through the experiments, we show that the predicted visual sensitivity maps agree with the human subjective opinions. In addition, DeepQA achieves the state-of-the-art prediction accuracy among FR-IQA models.",
author = "Jongyoo Kim and Sanghoon Lee",
year = "2017",
month = "11",
day = "6",
doi = "10.1109/CVPR.2017.213",
language = "English",
series = "Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "1969--1977",
booktitle = "Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017",
address = "United States",
}

Kim, J & Lee, S 2017, Deep learning of human visual sensitivity in image quality assessment framework. in Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, Institute of Electrical and Electronics Engineers Inc., pp. 1969-1977, 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, United States, 17/7/21. https://doi.org/10.1109/CVPR.2017.213

Deep learning of human visual sensitivity in image quality assessment framework. / Kim, Jongyoo; Lee, Sanghoon.

Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. Institute of Electrical and Electronics Engineers Inc., 2017. p. 1969-1977 (Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017; Vol. 2017-January).


TY  - GEN
T1  - Deep learning of human visual sensitivity in image quality assessment framework
AU  - Kim, Jongyoo
AU  - Lee, Sanghoon
PY  - 2017/11/6
Y1  - 2017/11/6
N2  - Since human observers are the ultimate receivers of digital images, image quality metrics should be designed from a human-oriented perspective. Conventionally, a number of full-reference image quality assessment (FR-IQA) methods adopted various computational models of the human visual system (HVS) from psychological vision science research. In this paper, we propose a novel convolutional neural networks (CNN) based FR-IQA model, named Deep Image Quality Assessment (DeepQA), where the behavior of the HVS is learned from the underlying data distribution of IQA databases. Different from previous studies, our model seeks the optimal visual weight based on understanding of database information itself without any prior knowledge of the HVS. Through the experiments, we show that the predicted visual sensitivity maps agree with the human subjective opinions. In addition, DeepQA achieves the state-of-the-art prediction accuracy among FR-IQA models.
AB  - Since human observers are the ultimate receivers of digital images, image quality metrics should be designed from a human-oriented perspective. Conventionally, a number of full-reference image quality assessment (FR-IQA) methods adopted various computational models of the human visual system (HVS) from psychological vision science research. In this paper, we propose a novel convolutional neural networks (CNN) based FR-IQA model, named Deep Image Quality Assessment (DeepQA), where the behavior of the HVS is learned from the underlying data distribution of IQA databases. Different from previous studies, our model seeks the optimal visual weight based on understanding of database information itself without any prior knowledge of the HVS. Through the experiments, we show that the predicted visual sensitivity maps agree with the human subjective opinions. In addition, DeepQA achieves the state-of-the-art prediction accuracy among FR-IQA models.
UR  - http://www.scopus.com/inward/record.url?scp=85029326170&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=85029326170&partnerID=8YFLogxK
U2  - 10.1109/CVPR.2017.213
DO  - 10.1109/CVPR.2017.213
M3  - Conference contribution
AN  - SCOPUS:85029326170
T3  - Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
SP  - 1969
EP  - 1977
BT  - Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
PB  - Institute of Electrical and Electronics Engineers Inc.
ER  -

Kim J, Lee S. Deep learning of human visual sensitivity in image quality assessment framework. In Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. Institute of Electrical and Electronics Engineers Inc. 2017. p. 1969-1977. (Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017). https://doi.org/10.1109/CVPR.2017.213