Perceptually scalable extension of H.264

Hojin Ha, Jincheol Park, Sanghoon Lee, Alan Conrad Bovik

Research output: Contribution to journal › Article



We propose a novel visual scalable video coding (VSVC) framework, named VSVC H.264/AVC. In this approach, the non-uniform sampling characteristic of the human eye is used to modify scalable video coding (SVC) H.264/AVC. We exploit the visibility of video content and the scalability of the video codec to achieve optimal subjective visual quality given limited system resources. To achieve the largest coding gain with controlled perceptual quality degradation, a perceptual weighting scheme is deployed wherein the compressed video is weighted as a function of visual saliency and of the non-uniform distribution of retinal photoreceptors. We develop a resource allocation algorithm emphasizing both efficiency and fairness by controlling the size of the salient region in each quality layer. Efficiency is emphasized on the low-quality layer of the SVC: the bits saved by eliminating perceptual redundancy in regions of low interest are allocated to lower block-level distortions in salient regions. Fairness is enforced on the higher-quality layers by enlarging the salient regions. Simulation results show that the proposed VSVC framework significantly improves the subjective visual quality of compressed videos.
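The perceptual weighting described above combines saliency with the eccentricity-dependent falloff of retinal resolution (foveation). The abstract does not give the paper's exact formula, so the following is only a minimal sketch of the general idea, assuming a standard Geisler–Perona-style contrast-sensitivity falloff with a hypothetical half-resolution eccentricity `e2`; the fixation point, viewing distance, and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def foveation_weight_map(h, w, fix_y, fix_x, viewing_dist_px, e2=2.3):
    """Per-pixel perceptual weights that decay with retinal eccentricity.

    fix_y, fix_x      : fixation (saliency) point in pixels (assumed input)
    viewing_dist_px   : viewing distance expressed in pixels
    e2                : half-resolution eccentricity in degrees (illustrative value)
    """
    ys, xs = np.mgrid[0:h, 0:w]
    # distance of each pixel from the fixation point, in pixels
    dist = np.sqrt((ys - fix_y) ** 2 + (xs - fix_x) ** 2)
    # convert to retinal eccentricity in degrees of visual angle
    ecc = np.degrees(np.arctan(dist / viewing_dist_px))
    # resolution falls off roughly as e2 / (e2 + eccentricity);
    # weight is 1.0 at the fixation point and decays toward the periphery
    return e2 / (e2 + ecc)

weights = foveation_weight_map(288, 352, fix_y=144, fix_x=176,
                               viewing_dist_px=1000.0)
```

In a rate-allocation loop, such a map could scale per-block distortion so that bits migrate from low-weight peripheral blocks toward high-weight salient blocks, which is the efficiency/fairness trade-off the abstract describes.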

Original language: English
Article number: 5739513
Pages (from-to): 1667-1678
Number of pages: 12
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Issue number: 11
Publication status: Published - 2011 Nov 1


All Science Journal Classification (ASJC) codes

  • Media Technology
  • Electrical and Electronic Engineering
