Motion/disparity-compensated multiview sequence coding

Research output: Contribution to journal › Article

3 Citations (Scopus)

Abstract

A multiview sequence coding technique is proposed in this paper. A multiview sequence encoder must reduce redundancies in both the time and view domains, because the amount of data grows with the number of views (or cameras). We define a new coding structure, a group of a group of pictures (GGOP), which is compatible with MPEG-2 and is flexible with respect to the baseline distance. A GGOP can take several forms, e.g., a one-I type, a two-I type, etc., according to the number of reference frame sequences, so that a suitable type can be selected for the baseline distance among the cameras. The proposed multiview sequence encoder consists of preprocessing stages, disparity estimation/compensation, motion estimation/compensation, residual coding, rate control, and entropy coding. It generates two types of bit streams: a main bit stream and an auxiliary bit stream. The main bit stream carries the reference sequences, including the I-pictures, in order to maintain MPEG-2 compatibility; the auxiliary bit stream carries the remaining multiview sequences. The proposed encoder improves both the compression ratio and the peak signal-to-noise ratio (PSNR) compared with conventional methods, and the dependency of the GGOP on the baseline distance between cameras is also confirmed.
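The disparity estimation/compensation stage mentioned above exploits redundancy across views, analogous to how motion estimation exploits redundancy across time. A common building block for such a stage is block matching: for each block in one view, search along the baseline in the neighboring view for the best-matching block. The sketch below is a minimal, hypothetical illustration of SAD-based block matching for horizontal disparity; it is not the paper's actual algorithm, and the block size and search-range parameters are assumptions for illustration only.

```python
import numpy as np

def block_matching_disparity(left, right, block=8, max_disp=16):
    """Estimate a per-block horizontal disparity between two views by
    minimizing the sum of absolute differences (SAD).

    Illustrative sketch only: real multiview encoders add sub-pixel
    refinement, rate-distortion costs, and occlusion handling.
    """
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(int)
            best_d, best_sad = 0, None
            # Search candidate blocks shifted left by d in the other view.
            for d in range(max_disp + 1):
                if x - d < 0:
                    break
                cand = right[y:y + block, x - d:x - d + block].astype(int)
                sad = np.abs(ref - cand).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by, bx] = best_d
    return disp

# Synthetic stereo pair: the right view is the left view shifted by 4 pixels,
# so interior blocks should recover a disparity of 4.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
right = np.zeros_like(left)
right[:, :-4] = left[:, 4:]
disparity = block_matching_disparity(left, right, block=8, max_disp=8)
```

The compensation step would then predict each block of the dependent view from the reference view at the estimated offset and code only the residual, which is what allows the auxiliary bit stream to stay small relative to independently coded views.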

Original language: English
Pages (from-to): 123-141
Number of pages: 19
Journal: Circuits, Systems, and Signal Processing
Volume: 23
Issue number: 2
DOIs
Publication status: Published - 2004 Mar 1

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Applied Mathematics

