Abstract
Most existing tracking algorithms do not explicitly consider the motion blur contained in video sequences, which degrades their performance in real-world applications where motion blur often occurs. In this paper, we propose to solve the motion blur problem in visual tracking within a unified framework. Specifically, a joint blur state estimation and multi-task reverse sparse learning framework is presented, in which closed-form solutions for the blur kernel and the sparse code matrix are obtained simultaneously. The reverse process treats the blurry candidates as dictionary elements and sparsely represents the blurred templates with these candidates. By exploiting the information contained in the sparse code matrix, an efficient likelihood model is further developed, which quickly excludes irrelevant candidates and reduces the number of particles required. Experimental results on challenging benchmarks show that our method performs favorably against state-of-the-art trackers.
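To make the reverse representation idea concrete, below is a minimal sketch of pruning candidates by sparsely coding the (blurred) templates over the candidate set. It is not the paper's method: the function name `prune_candidates`, the parameters `alpha` and `keep_ratio`, and the use of scikit-learn's `MultiTaskLasso` (an l2,1-regularized solver standing in for the paper's closed-form joint solution) are illustrative assumptions, and blur kernel estimation is omitted entirely.

```python
# Hedged sketch: reverse sparse representation for candidate pruning.
# Candidates act as the dictionary; blurred templates are the signals.
import numpy as np
from sklearn.linear_model import MultiTaskLasso


def prune_candidates(candidates, templates, alpha=0.01, keep_ratio=0.2):
    """candidates: (d, n_candidates); templates: (d, n_templates).
    Represents the templates over the candidate set and keeps the
    candidates that receive the largest total sparse weight."""
    # Reverse representation: templates ~ candidates @ C, with row-sparse C.
    model = MultiTaskLasso(alpha=alpha, fit_intercept=False, max_iter=2000)
    model.fit(candidates, templates)      # X: (d, n_cand), Y: (d, n_tmpl)
    C = model.coef_.T                     # (n_candidates, n_templates)
    # Candidate relevance: total weight it contributes across all templates.
    score = np.abs(C).sum(axis=1)
    n_keep = max(1, int(keep_ratio * candidates.shape[1]))
    keep = np.argsort(score)[::-1][:n_keep]
    return keep, score


# Toy usage with random data standing in for flattened image patches.
rng = np.random.default_rng(0)
cands = rng.standard_normal((256, 200))   # 200 candidate patches
tmpls = rng.standard_normal((256, 10))    # 10 blurred templates
kept, scores = prune_candidates(cands, tmpls)
print(kept[:5], scores[kept[:5]])
```

Candidates with near-zero total weight in the sparse code matrix contribute little to reconstructing any template, which is the intuition behind using the code matrix to discard irrelevant particles early.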
| Original language | English |
| --- | --- |
| Article number | 7585089 |
| Pages (from-to) | 5867-5876 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 25 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - 2016 Dec |
Bibliographical note
Funding Information: This work was supported in part by the National Natural Science Foundation of China under Grant 61472036 and Grant 61272359, in part by the National Basic Research Program of China (973 Program) under Grant 2013CB328805, in part by the Australian Research Council's Discovery Projects Funding Scheme under Grant DP150104645, and in part by the Specialized Fund for Joint Building Program of Beijing Municipal Education Commission.
Publisher Copyright:
© 2016 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Computer Graphics and Computer-Aided Design