Most prior approaches to the problem of stereoscopic 3D (S3D) visual discomfort prediction (VDP) have focused on extracting perceptually meaningful handcrafted features based on models of visual perception and of natural depth statistics. Toward advancing performance on this problem, we have developed a deep learning-based VDP model named deep visual discomfort predictor (DeepVDP). DeepVDP uses a convolutional neural network (CNN) to learn features that are highly predictive of experienced visual discomfort. Since a large amount of reference data is needed to train a CNN, we develop a systematic way of dividing each S3D image into local regions, defined as patches, and train a patch-based CNN in two sequential steps. Since it is very difficult to obtain human opinions on each patch, a proxy ground-truth label generated by an existing S3D visual discomfort prediction algorithm, called 3D-VDP, is instead assigned to each patch. These proxy ground-truth labels are used in the first stage of training the CNN. In the second stage, the automatically learned local abstractions are aggregated into global features via a feature aggregation layer, and the learned features are iteratively updated via supervised learning on subjective 3D discomfort scores, which serve as ground-truth labels for each S3D image. The patch-based CNN model pretrained on proxy ground-truth labels is thus subsequently retrained on true global subjective scores. The global S3D visual discomfort scores predicted by the trained DeepVDP model achieve state-of-the-art performance compared with previous VDP algorithms.
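To make the patch-based formulation concrete, the sketch below divides an image into a regular grid of local patches, as described above. The patch size and stride are illustrative assumptions (the abstract does not specify them), and the function name `extract_patches` is hypothetical, not from the paper.

```python
import numpy as np

def extract_patches(image, patch_size=64, stride=64):
    """Divide an image into a grid of local patches.

    A minimal sketch of the patch-division step; the actual patch
    dimensions used by DeepVDP are assumptions here.
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    # Stack into a single array: (num_patches, patch_size, patch_size, channels)
    return np.stack(patches)

# Example: a 256x256 RGB image yields a 4x4 grid of 64x64 patches.
img = np.zeros((256, 256, 3), dtype=np.uint8)
patches = extract_patches(img)
print(patches.shape)  # (16, 64, 64, 3)
```

Each such patch would then receive a proxy label from 3D-VDP for the first training stage, before the aggregation layer combines patch-level features into a global prediction in the second stage.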
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Publication status: Published - 2018 Nov
Bibliographical note
Funding Information:
Manuscript received November 3, 2017; revised March 9, 2018 and June 6, 2018; accepted June 14, 2018. Date of publication June 29, 2018; date of current version August 14, 2018. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. 2016R1A2B2014525). The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Aljosa Smolic. (Corresponding author: Sanghoon Lee.) H. Oh, S. Ahn, and S. Lee are with the Department of Electrical and Electronics Engineering, Yonsei University, Seoul 120-749, South Korea (e-mail: firstname.lastname@example.org; email@example.com; firstname.lastname@example.org).
All Science Journal Classification (ASJC) codes
- Computer Graphics and Computer-Aided Design