Visual tracking via Boolean map representations

Kaihua Zhang, Qingshan Liu, Jian Yang, Ming-Hsuan Yang

Research output: Contribution to journal › Article › peer-review

27 Citations (Scopus)

Abstract

In this paper, we present a simple yet effective Boolean map based representation that exploits connectivity cues for visual tracking. We describe a target object with histograms of oriented gradients and raw color features, each of which is characterized by a set of Boolean maps generated by uniformly thresholding its values. The Boolean maps effectively encode multi-scale connectivity cues of the target at different granularities. The fine-grained Boolean maps capture spatially structural details that are effective for precise target localization, while the coarse-grained ones encode global shape information that is robust to large target appearance variations. Finally, all the Boolean maps together form a robust representation that can be approximated by an explicit feature map of the intersection kernel, which is fed into a logistic regression classifier with online update, and the target location is estimated within a particle filter framework. The proposed representation scheme is computationally efficient and achieves favorable performance in terms of accuracy and robustness compared with state-of-the-art tracking methods on the OTB50 and VOT2016 benchmark datasets.
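The core representation step described above, generating Boolean maps by uniformly thresholding a feature channel, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name `boolean_maps`, the number of thresholds, and the exclusion of the two extreme levels are assumptions for the sketch.

```python
import numpy as np

def boolean_maps(feature_map, num_thresholds=8):
    """Generate Boolean maps by uniformly thresholding one feature channel.

    Lower thresholds yield coarse maps (global shape); higher thresholds
    yield fine maps (structural detail), giving a multi-scale encoding.
    """
    fmin, fmax = feature_map.min(), feature_map.max()
    # uniform thresholds over the channel's range, excluding the two extremes
    thresholds = np.linspace(fmin, fmax, num_thresholds + 2)[1:-1]
    return [(feature_map > t).astype(np.uint8) for t in thresholds]

# toy single-channel feature map (in practice: a HOG or raw-color channel)
fm = np.array([[0.1, 0.9],
               [0.5, 0.3]])
maps = boolean_maps(fm, num_thresholds=4)
```

Stacking such maps across all HOG and color channels yields the binary representation that the paper then approximates with an explicit intersection-kernel feature map before classification.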

Original language: English
Pages (from-to): 147-160
Number of pages: 14
Journal: Pattern Recognition
Volume: 81
DOIs
Publication status: Published - Sep 2018

Bibliographical note

Funding Information:
This work is supported in part by the Natural Science Foundation of Jiangsu Province under Grants BK20151529 and BK20170040, in part by the Six Talent Peaks Project in Jiangsu Province under Grant R2017L07, in part by the NSFC under Grant 61532009, in part by the NSFC under Grants U1713208 and 61472187, the 973 Program under Grant 2014CB349303, and the Program for Changjiang Scholars.

Funding Information:
Ming-Hsuan Yang is a professor in Electrical Engineering and Computer Science at the University of California, Merced. He received the Ph.D. degree in computer science from the University of Illinois at Urbana-Champaign in 2000. Prior to joining UC Merced in 2008, he was a senior research scientist at the Honda Research Institute working on vision problems related to humanoid robots. He coauthored the book Face Detection and Gesture Recognition for Human-Computer Interaction (Kluwer Academic, 2001), edited a special issue on face recognition for Computer Vision and Image Understanding in 2003, and edited a special issue on real-world face recognition for IEEE Transactions on Pattern Analysis and Machine Intelligence. Yang served as an associate editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence from 2007 to 2011, and is an associate editor of the International Journal of Computer Vision, Image and Vision Computing, and the Journal of Artificial Intelligence Research. He received the NSF CAREER Award in 2012, the Senate Award for Distinguished Early Career Research at UC Merced in 2011, and the Google Faculty Award in 2009. He is a senior member of the IEEE and the ACM.

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

