Transferring visual prior for online object tracking

Qing Wang, Feng Chen, Jimei Yang, Wenli Xu, Ming-Hsuan Yang

Research output: Contribution to journal › Article

82 Citations (Scopus)

Abstract

Visual prior from generic real-world images can be learned and transferred for representing objects in a scene. Motivated by this, we propose an algorithm that transfers visual prior learned offline for online object tracking. From a collection of real-world images, we learn an overcomplete dictionary to represent visual prior. The prior knowledge of objects is generic, and the training image set does not necessarily contain any observation of the target object. During the tracking process, the learned visual prior is transferred to construct an object representation by sparse coding and multiscale max pooling. With this representation, a linear classifier is learned online to distinguish the target from the background and to account for the target and background appearance variations over time. Tracking is then carried out within a Bayesian inference framework, in which the learned classifier is used to construct the observation model and a particle filter is used to estimate the tracking result sequentially. Experiments on a variety of challenging sequences with comparisons to several state-of-the-art methods demonstrate that more robust object tracking can be achieved by transferring visual prior.
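As a rough illustration of the representation step described above (not the authors' implementation), the sketch below sparse-codes local patches of a candidate region against an overcomplete dictionary learned offline from generic images, then aggregates the codes by multiscale max pooling. The patch grid, pooling levels, and lasso penalty are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

def encode_region(patches, dictionary, alpha=0.1):
    """Sparse-code patches (n_patches x patch_dim) against an overcomplete
    dictionary (n_atoms x patch_dim) learned offline from generic images.
    The lasso penalty alpha is an illustrative choice."""
    return sparse_encode(patches, dictionary,
                         algorithm="lasso_lars", alpha=alpha)

def multiscale_max_pool(codes, grid_shape, levels=(1, 2, 4)):
    """Max-pool sparse codes over spatial grids of increasing resolution
    (spatial-pyramid-style pooling) and concatenate the results."""
    rows, cols = grid_shape  # spatial layout of the patches in the region
    codes = codes.reshape(rows, cols, -1)
    pooled = []
    for n in levels:
        # split the region into an n x n grid, take the max over each cell
        for r in np.array_split(np.arange(rows), n):
            for c in np.array_split(np.arange(cols), n):
                if r.size and c.size:
                    pooled.append(codes[np.ix_(r, c)].max(axis=(0, 1)))
    return np.concatenate(pooled)  # feature vector for the linear classifier

# Hypothetical usage: an 8x8 grid of 6x6 gray patches (dim 36) and a
# 128-atom dictionary D (e.g., from sklearn's DictionaryLearning):
#   codes = encode_region(patches, D)
#   feat = multiscale_max_pool(codes, grid_shape=(8, 8))
```

In the pipeline the abstract describes, such a pooled vector would then be scored by the online-updated linear classifier inside the particle filter's observation model.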

Original language: English
Article number: 6178278
Pages (from-to): 3296-3305
Number of pages: 10
Journal: IEEE Transactions on Image Processing
Volume: 21
Issue number: 7
DOIs
Publication status: Published - 2012 Jul 1


All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design
