The direct use of a deep convolutional neural network (CNN) in no-reference image quality assessment (NR-IQA) often struggles to achieve good performance due to a lack of training data, which can be alleviated by transfer learning. However, the final performance varies widely depending on the similarity between the source and target tasks. In particular, IQA involves diverse distortion types, each requiring different kinds of features to predict visual quality. In this paper, to make the transferred model robust to various distortion types, we propose a Multiple-level Feature-based Image Quality Assessor (MFIQA), which considers multiple levels of features simultaneously. Through rigorous experiments, we show that MFIQA consistently yields state-of-the-art performance across distortion types, including both synthetic and authentic corruptions.
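The core idea — combining feature statistics from multiple levels of a network, from low-level (texture, noise) to high-level (semantic) representations — can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation: the pooling stages stand in for CNN layers, and the per-level statistics and linear readout are illustrative assumptions only.

```python
import numpy as np

def multi_level_features(image, n_levels=3):
    """Hypothetical sketch of multi-level feature extraction:
    collect simple statistics from progressively downsampled
    versions of the image, mimicking how features from shallow
    and deep CNN stages capture different distortion cues."""
    feats = []
    x = image.astype(np.float64)
    for _ in range(n_levels):
        # per-level statistics (a stand-in for pooled CNN activations)
        feats.extend([x.mean(), x.std()])
        # 2x2 average pooling as a stand-in for one CNN stage
        h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
        x = x[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.array(feats)

# quality prediction as a linear readout over the concatenated
# multi-level features (weights would normally be learned)
rng = np.random.default_rng(0)
img = rng.random((32, 32))
f = multi_level_features(img)
w = rng.random(f.size)
score = float(f @ w)
```

Because the concatenated vector mixes statistics from every level, a regressor trained on it can weight low-level cues for synthetic distortions (e.g. blur, noise) and higher-level cues for authentic corruptions, which is the robustness property the abstract claims.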
Title of host publication: 2018 IEEE International Conference on Image Processing, ICIP 2018 - Proceedings
Publisher: IEEE Computer Society
Number of pages: 5
Publication status: Published - 2018 Aug 29
Event: 25th IEEE International Conference on Image Processing, ICIP 2018 - Athens, Greece
Duration: 2018 Oct 7 → 2018 Oct 10
Name: Proceedings - International Conference on Image Processing, ICIP
Conference: 25th IEEE International Conference on Image Processing, ICIP 2018
Period: 2018 Oct 7 → 2018 Oct 10
Bibliographical note (Funding Information):
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2016R1A2B2014525).
© 2018 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition
- Signal Processing