The past decade has witnessed significant progress in object detection and tracking in videos. In this paper, we present a collaborative model between a pre-trained object detector and a number of single-object online trackers within the particle filtering framework. For each frame, we construct an association between detections and trackers, and treat each detected image region that is associated with a tracker as a key sample for online update. We present a motion model that incorporates the associated detections with object dynamics. Furthermore, we propose an effective sample selection scheme to update the appearance model of each tracker. We use discriminative and generative appearance models for the likelihood function and data association, respectively. Experimental results show that the proposed scheme generally outperforms state-of-the-art methods.
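The abstract's motion model combines object dynamics with an associated detection during particle propagation. The sketch below is a minimal, illustrative version of that idea, not the paper's actual formulation: particles follow a constant-velocity dynamics model, and when a detection is associated, a fraction of them is re-drawn around the detected position. All parameter names (`sigma_dyn`, `sigma_det`, `alpha`) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_particles(particles, velocity, detection=None,
                        sigma_dyn=4.0, sigma_det=2.0, alpha=0.5):
    """Propagate 2-D particle states (x, y) for one frame.

    With no associated detection, particles follow the dynamics model
    (constant velocity plus Gaussian noise). When a detection is
    associated, a fraction `alpha` of the particles is instead drawn
    around the detected position, blending detection evidence into the
    motion model. This is an illustrative sketch, not the paper's exact
    motion model.
    """
    n = len(particles)
    moved = particles + velocity + rng.normal(0.0, sigma_dyn, particles.shape)
    if detection is None:
        return moved
    guided = rng.random(n) < alpha  # particles guided by the detection
    moved[guided] = detection + rng.normal(0.0, sigma_det, (guided.sum(), 2))
    return moved

# Example: 100 particles at the origin, constant velocity (3, 0),
# with an associated detection at (10, 0).
particles = np.zeros((100, 2))
out = propagate_particles(particles, velocity=np.array([3.0, 0.0]),
                          detection=np.array([10.0, 0.0]))
```

In a full tracker, the propagated particles would then be weighted by the discriminative appearance likelihood mentioned in the abstract and resampled.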
Bibliographical note
Funding Information:
The authors would like to thank Dr. Y. Wu for his helpful discussions and suggestions. They would also like to thank all the authors that made their codes available for comparison of the proposed algorithm with theirs and the anonymous reviewers for their constructive comments and suggestions. M.A. Naiel would like to acknowledge the support from Concordia University to conduct this research. This work is supported by research grants from the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Regroupement Stratégique en Microsystèmes du Québec (ReSMiQ) awarded to M.O. Ahmad and M.N.S. Swamy. J. Lim is supported by the National Research Foundation (NRF) of Korea grant #2014R1A1A2058501. M.-H. Yang is supported in part by the National Science Foundation (NSF) CAREER grant #1149783 and a gift from Panasonic.
All Science Journal Classification (ASJC) codes
- Signal Processing
- Computer Vision and Pattern Recognition