Semantic-Driven Generation of Hyperlapse from 360 Degree Video

Wei-Sheng Lai, Yujia Huang, Neel Joshi, Christopher Buehler, Ming-Hsuan Yang, Sing Bing Kang

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

We present a system for converting a fully panoramic (360 degree) video into a normal field-of-view (NFOV) hyperlapse for an optimal viewing experience. Our system exploits visual saliency and semantics to non-uniformly sample in space and time for generating hyperlapses. In addition, users can optionally choose objects of interest for customizing the hyperlapses. We first stabilize an input 360 degree video by smoothing the rotation between adjacent frames and then compute regions of interest and saliency scores. An initial hyperlapse is generated by optimizing the saliency and motion smoothness followed by the saliency-aware frame selection. We further smooth the result using an efficient 2D video stabilization approach that adaptively selects the motion model to generate the final hyperlapse. We validate the design of our system by showing results for a variety of scenes and comparing against the state-of-the-art method through a large-scale user study.
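The saliency-aware frame selection described above can be illustrated with a small dynamic-programming sketch: choose a path of frames that favors high saliency while keeping the skip between chosen frames close to a target speedup. This is a minimal, hypothetical illustration, not the paper's actual cost function; the names `select_frames`, `lam`, and `max_skip`, and the quadratic skip penalty, are assumptions for demonstration.

```python
def select_frames(saliency, speedup=8, lam=0.1, max_skip=16):
    """Saliency-aware frame selection via dynamic programming (sketch).

    Picks a path of frame indices that rewards high-saliency frames
    while penalizing skips that deviate from the target `speedup`.
    Illustrative only; the paper's actual cost terms differ.
    """
    n = len(saliency)
    INF = float("inf")
    dp = [INF] * n        # dp[j]: best cost of a path ending at frame j
    prev = [-1] * n       # back-pointers for path recovery
    dp[0] = -saliency[0]  # the path always starts at the first frame
    for j in range(1, n):
        for i in range(max(0, j - max_skip), j):
            # reward saliency of frame j, penalize skip far from speedup
            c = dp[i] - saliency[j] + lam * (j - i - speedup) ** 2
            if c < dp[j]:
                dp[j], prev[j] = c, i
    # trace back from the last frame to recover the selected path
    path, j = [], n - 1
    while j != -1:
        path.append(j)
        j = prev[j]
    return path[::-1]
```

With uniform saliency the quadratic penalty dominates, so the selected frames fall at regular intervals of the target speedup; raising the saliency of a nearby frame pulls the path toward it at the cost of a slightly uneven skip.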

Original language: English
Article number: 8031049
Pages (from-to): 2610-2621
Number of pages: 12
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 24
Issue number: 9
DOIs
Publication status: Published - 2018 Sep 1

Bibliographical note

Publisher Copyright:
© 1995-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design

