High-performance video quality assessment (VQA) algorithms are essential for delivering high-quality video to viewers. However, because the nonlinear perceptual mapping between a video's distortion level and its subjective quality score is not precisely defined, accurately predicting video quality remains difficult. In this paper, we propose a deep learning scheme named Deep Blind Video Quality Assessment (DeepBVQA) that achieves a more accurate and reliable video quality predictor by considering spatial and temporal cues that have not been exploited before. We use a convolutional neural network (CNN) to extract the spatial cues of each video and propose new hand-crafted features for the temporal cues. Experiments show that the proposed model outperforms state-of-the-art no-reference (NR) VQA models and that introducing hand-crafted temporal features is highly effective for VQA.
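The abstract does not specify which temporal features the paper uses, but the general idea of a hand-crafted temporal cue can be illustrated with a minimal sketch: statistics of inter-frame luminance differences, which tend to track temporal distortions such as flicker or jerkiness. The feature set below is an assumption for illustration only, not the paper's actual design.

```python
import numpy as np

def temporal_cues(frames):
    """Illustrative hand-crafted temporal features (NOT the paper's exact ones):
    statistics of absolute inter-frame luminance differences."""
    frames = np.asarray(frames, dtype=np.float64)  # shape: (T, H, W)
    diffs = np.abs(np.diff(frames, axis=0))        # per-pixel change between frames
    per_frame = diffs.mean(axis=(1, 2))            # mean change per transition
    return np.array([per_frame.mean(), per_frame.std(), per_frame.max()])

# A static clip shows no temporal activity; a flickering clip shows a lot.
static = np.ones((10, 8, 8))
flicker = np.stack([np.full((8, 8), (i % 2) * 255.0) for i in range(10)])
print(temporal_cues(static))   # → [0. 0. 0.]
print(temporal_cues(flicker))  # → [255. 0. 255.]
```

In a full NR-VQA pipeline of the kind the abstract describes, such temporal statistics would be concatenated with per-frame CNN spatial features before regression onto subjective quality scores.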
|Title of host publication||2018 IEEE International Conference on Image Processing, ICIP 2018 - Proceedings|
|Publisher||IEEE Computer Society|
|Number of pages||5|
|Publication status||Published - 2018 Aug 29|
|Event||25th IEEE International Conference on Image Processing, ICIP 2018 - Athens, Greece|
Duration: 2018 Oct 7 → 2018 Oct 10
|Name||Proceedings - International Conference on Image Processing, ICIP|
|Conference||25th IEEE International Conference on Image Processing, ICIP 2018|
|Period||18/10/7 → 18/10/10|
|Bibliographical note||Funding Information:|
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2016R1A2B2014525).
© 2018 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition
- Signal Processing