Exploiting spatial-temporal locality of tracking via structured dictionary learning

Yao Sui, Guanghui Wang, Li Zhang, Ming-Hsuan Yang

Research output: Contribution to journal › Article

12 Citations (Scopus)


In this paper, a novel spatial-temporal locality is proposed and unified within a discriminative dictionary learning framework for visual tracking. The spatial-temporal locality is obtained by exploring the strong local correlations between the temporally obtained targets and their spatially distributed nearby background neighbors. This locality is formulated as a subspace model and exploited under a unified structure of discriminative dictionary learning with a subspace constraint. Using the learned dictionary, the target and its background can be described and distinguished effectively through their sparse codes. As a result, the target is localized by integrating both the descriptive and the discriminative qualities. Extensive experiments on various challenging video sequences demonstrate the superior performance of the proposed algorithm over other state-of-the-art approaches.
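The idea of describing and distinguishing the target and background through sparse codes over a structured dictionary can be illustrated with a minimal sketch. The dictionary layout (target atoms followed by background atoms), the random templates, and the reconstruction-error scoring below are illustrative assumptions, not the paper's actual formulation; sparse coding is done here with off-the-shelf orthogonal matching pursuit from scikit-learn.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
d, n_t, n_b = 32, 10, 10  # feature dim, #target atoms, #background atoms

# Hypothetical structured dictionary: target templates first, then
# background templates (stand-ins for patches sampled around the target).
D_t = rng.normal(size=(d, n_t))
D_b = rng.normal(size=(d, n_b))
D = np.hstack([D_t, D_b])
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms, standard for dictionaries

def classify_patch(y, D, n_t, n_nonzero=5):
    """Sparse-code a candidate patch y over the joint dictionary, then
    score it by reconstruction error from the target vs. background
    sub-dictionaries (a common heuristic in sparse trackers)."""
    a = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero).ravel()
    err_t = np.linalg.norm(y - D[:, :n_t] @ a[:n_t])   # target-only error
    err_b = np.linalg.norm(y - D[:, n_t:] @ a[n_t:])   # background-only error
    return ("target" if err_t < err_b else "background"), a

# A candidate patch closely resembling the first target template.
y = D[:, 0] + 0.05 * rng.normal(size=d)
label, code = classify_patch(y, D, n_t)
```

In a tracker, each candidate window in the new frame would be scored this way, and the window whose code is best explained by the target atoms (lowest target reconstruction error) would be selected as the new target location.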

Original language: English
Pages (from-to): 1282-1296
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Issue number: 3
Publication status: Published - 2018 Mar

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

