Exploiting spatial-temporal locality of tracking via structured dictionary learning

Yao Sui, Guanghui Wang, Li Zhang, Ming Hsuan Yang

Research output: Contribution to journal › Article

8 Citations (Scopus)

Abstract

In this paper, a novel spatial-temporal locality is proposed and unified within a discriminative dictionary learning framework for visual tracking. The locality is obtained by exploring the strong local correlations between the temporally obtained targets and their spatially distributed nearby background neighbors. It is formulated as a subspace model and exploited under a unified structure of discriminative dictionary learning with a subspace structure. Using the learned dictionary, the target and its background can be described and distinguished effectively through their sparse codes. As a result, the target is localized by integrating both the descriptive and the discriminative qualities. Extensive experiments on various challenging video sequences demonstrate the superior performance of the proposed algorithm over other state-of-the-art approaches.
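
To make the high-level idea above concrete, the sketch below illustrates one way such a tracker could score candidate patches: each candidate is sparse-coded over a dictionary whose atoms are split into target and background parts, and the score combines a descriptive term (reconstruction error against the target atoms) with a discriminative term (how much of the sparse code's energy falls on the target atoms). This is a minimal illustrative sketch, not the authors' algorithm; the dictionary split, the lasso-based coding step, and all function and variable names are assumptions introduced here.

import numpy as np
from sklearn.linear_model import Lasso

def sparse_code(D, x, alpha=0.01):
    # Sparse-code a feature vector x of shape (feature_dim,) over a
    # dictionary D of shape (feature_dim, n_atoms).
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=2000)
    lasso.fit(D, x)
    return lasso.coef_  # sparse coefficients, shape (n_atoms,)

def score_candidate(D_target, D_background, x, alpha=0.01):
    # Higher score = more target-like. Descriptive quality: low reconstruction
    # error using only the target atoms. Discriminative quality: the sparse
    # code's energy is concentrated on the target atoms rather than on the
    # background atoms. The way the two terms are combined is illustrative.
    D = np.hstack([D_target, D_background])
    z = sparse_code(D, x, alpha)
    k = D_target.shape[1]
    z_t, z_b = z[:k], z[k:]
    recon_err = np.linalg.norm(x - D_target @ z_t)
    disc = np.abs(z_t).sum() / (np.abs(z_t).sum() + np.abs(z_b).sum() + 1e-12)
    return disc - recon_err

# Usage (with random stand-ins for a learned dictionary and candidate patches):
# the candidate with the highest score is taken as the tracked target.
rng = np.random.default_rng(0)
D_t = rng.standard_normal((64, 16))      # target atoms
D_b = rng.standard_normal((64, 16))      # background atoms
candidates = rng.standard_normal((10, 64))
best = max(range(len(candidates)), key=lambda i: score_candidate(D_t, D_b, candidates[i]))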

Original language: English
Pages (from-to): 1282-1296
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Volume: 27
Issue number: 3
DOI: 10.1109/TIP.2017.2779275
Publication status: Published - March 2018

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Cite this

@article{3c89b70a1cd34c5a8d6dce5fee118ad5,
  title = "Exploiting spatial-temporal locality of tracking via structured dictionary learning",
  abstract = "In this paper, a novel spatial-temporal locality is proposed and unified within a discriminative dictionary learning framework for visual tracking. The locality is obtained by exploring the strong local correlations between the temporally obtained targets and their spatially distributed nearby background neighbors. It is formulated as a subspace model and exploited under a unified structure of discriminative dictionary learning with a subspace structure. Using the learned dictionary, the target and its background can be described and distinguished effectively through their sparse codes. As a result, the target is localized by integrating both the descriptive and the discriminative qualities. Extensive experiments on various challenging video sequences demonstrate the superior performance of the proposed algorithm over other state-of-the-art approaches.",
  author = "Sui, Yao and Wang, Guanghui and Zhang, Li and Yang, {Ming Hsuan}",
  year = "2018",
  month = mar,
  doi = "10.1109/TIP.2017.2779275",
  language = "English",
  volume = "27",
  pages = "1282--1296",
  journal = "IEEE Transactions on Image Processing",
  issn = "1057-7149",
  publisher = "Institute of Electrical and Electronics Engineers Inc.",
  number = "3",
}
