One of the most challenging ongoing issues in 3D visual research is how to interpret human 3D perception over the virtual 3D space between the human eye and a 3D display. When a human observer perceives a 3D structure, the brain classifies the scene into binocular or monocular vision regions, region by region, depending on the availability of binocular depth perception (coarse 3D perception). The details of the scene are then perceived by applying visual sensitivity to the classified 3D structure with reference to the fixation point (fine 3D perception). Motivated by this, we incorporate both coarse and fine 3D perception into quality assessment and propose a human 3D Perception-based Stereo image quality pooling (3DPS) model. In 3DPS, we divide the stereo image into segment units and classify each segment as either a binocular or a monocular vision region. We then assess the stereo image according to this classification by applying different visual weights in the pooling stage, achieving more accurate quality assessment. In particular, it is demonstrated that 3DPS performs remarkably well for quality assessment of stereo images distorted by coding and transmission errors.
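As a rough illustration of the segment-wise weighted pooling idea described in the abstract, a minimal sketch follows. The function name, the specific weight values, and the weighted-average combination rule are all assumptions for illustration; the abstract does not specify how the visual weights are derived or applied in 3DPS.

```python
import numpy as np

def pool_segment_scores(segment_scores, is_binocular, w_bino=1.0, w_mono=0.5):
    """Pool per-segment quality scores into one image-level score.

    Each segment is classified as a binocular or monocular vision
    region and receives a different visual weight (hypothetical
    values here); the pooled score is the weighted average.
    """
    scores = np.asarray(segment_scores, dtype=float)
    mask = np.asarray(is_binocular, dtype=bool)
    # Assign the binocular weight where the classifier says binocular,
    # the monocular weight elsewhere.
    weights = np.where(mask, w_bino, w_mono)
    return float(np.sum(weights * scores) / np.sum(weights))

# Example: two segments, one binocular (score 0.8) and one monocular (0.4).
pooled = pool_segment_scores([0.8, 0.4], [True, False])
```

With these hypothetical weights, the binocular segment contributes twice as strongly to the pooled score as the monocular one, reflecting the abstract's point that binocular regions carry different perceptual importance in pooling.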
Number of pages: 13
Journal: IEEE Journal on Selected Topics in Signal Processing
Publication status: Published - 2015 Apr 1
All Science Journal Classification (ASJC) codes
- Signal Processing
- Electrical and Electronic Engineering