Hedged Deep Tracking

Yuankai Qi, Shengping Zhang, Lei Qin, Hongxun Yao, Qingming Huang, Jongwoo Lim, Ming Hsuan Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

317 Citations (Scopus)

Abstract

In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second to last layer) can be further improved. In this paper, we propose a novel CNN based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-the-art trackers.
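The combination step described above, weighting several layer-wise CNN trackers with a Hedge-style multiplicative-weights update, can be illustrated with a minimal sketch. This is an illustrative reconstruction of the standard Hedge update, not the paper's adaptive variant: the fixed learning rate beta, the number of trackers, and the synthetic per-tracker losses below are assumptions made only for the example.

import numpy as np

def hedge_update(weights, losses, beta=0.1):
    """Standard Hedge (multiplicative-weights) update.

    weights: current weight of each expert (one weak tracker per CNN layer)
    losses:  loss suffered by each expert on the current frame, in [0, 1]
    beta:    learning rate; fixed here, whereas the paper adapts it over time
    """
    weights = weights * np.exp(-beta * losses)
    return weights / weights.sum()            # renormalize to a distribution

# Toy usage: fuse three layer-wise trackers over a few frames.
rng = np.random.default_rng(0)
weights = np.full(3, 1.0 / 3.0)               # start from uniform weights
for _ in range(5):
    losses = rng.uniform(0.0, 1.0, size=3)    # stand-in for real per-tracker losses
    weights = hedge_update(weights, losses)
print(weights)                                # trackers with smaller losses gain weight

In the actual tracker each expert's loss would be derived from how far its response deviates from the fused tracking result; random numbers stand in for those losses here.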

Original language: English
Title of host publication: Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Publisher: IEEE Computer Society
Pages: 4303-4311
Number of pages: 9
ISBN (Electronic): 9781467388504
DOIs: https://doi.org/10.1109/CVPR.2016.466
Publication status: Published - 2016 Dec 9
Event: 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016 - Las Vegas, United States
Duration: 2016 Jun 26 - 2016 Jul 1

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 2016-December
ISSN (Print): 1063-6919

Conference

Conference: 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Country: United States
City: Las Vegas
Period: 16/6/26 - 16/7/1

Fingerprint

  • Neural networks
  • Network layers
  • Experiments

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition

Cite this

Qi, Y., Zhang, S., Qin, L., Yao, H., Huang, Q., Lim, J., & Yang, M. H. (2016). Hedged Deep Tracking. In Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016 (pp. 4303-4311). [7780835] (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; Vol. 2016-December). IEEE Computer Society. https://doi.org/10.1109/CVPR.2016.466
Qi, Yuankai ; Zhang, Shengping ; Qin, Lei ; Yao, Hongxun ; Huang, Qingming ; Lim, Jongwoo ; Yang, Ming Hsuan. / Hedged Deep Tracking. Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016. IEEE Computer Society, 2016. pp. 4303-4311 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).
@inproceedings{357ee1fa6c45464eb9ebe9807a15e9b1,
title = "Hedged Deep Tracking",
abstract = "In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second to last layer) can be further improved. In this paper, we propose a novel CNN based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-theart trackers.",
author = "Yuankai Qi and Shengping Zhang and Lei Qin and Hongxun Yao and Qingming Huang and Jongwoo Lim and Yang, {Ming Hsuan}",
year = "2016",
month = "12",
day = "9",
doi = "10.1109/CVPR.2016.466",
language = "English",
series = "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition",
publisher = "IEEE Computer Society",
pages = "4303--4311",
booktitle = "Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016",
address = "United States",

}

Qi, Y, Zhang, S, Qin, L, Yao, H, Huang, Q, Lim, J & Yang, MH 2016, Hedged Deep Tracking. in Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016., 7780835, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2016-December, IEEE Computer Society, pp. 4303-4311, 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, United States, 16/6/26. https://doi.org/10.1109/CVPR.2016.466

Hedged Deep Tracking. / Qi, Yuankai; Zhang, Shengping; Qin, Lei; Yao, Hongxun; Huang, Qingming; Lim, Jongwoo; Yang, Ming Hsuan.

Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016. IEEE Computer Society, 2016. p. 4303-4311 7780835 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; Vol. 2016-December).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Hedged Deep Tracking

AU - Qi, Yuankai

AU - Zhang, Shengping

AU - Qin, Lei

AU - Yao, Hongxun

AU - Huang, Qingming

AU - Lim, Jongwoo

AU - Yang, Ming Hsuan

PY - 2016/12/9

Y1 - 2016/12/9

N2 - In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second to last layer) can be further improved. In this paper, we propose a novel CNN based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-the-art trackers.

AB - In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second to last layer) can be further improved. In this paper, we propose a novel CNN based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-the-art trackers.

UR - http://www.scopus.com/inward/record.url?scp=84986246054&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84986246054&partnerID=8YFLogxK

U2 - 10.1109/CVPR.2016.466

DO - 10.1109/CVPR.2016.466

M3 - Conference contribution

AN - SCOPUS:84986246054

T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

SP - 4303

EP - 4311

BT - Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016

PB - IEEE Computer Society

ER -

Qi Y, Zhang S, Qin L, Yao H, Huang Q, Lim J et al. Hedged Deep Tracking. In Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016. IEEE Computer Society. 2016. p. 4303-4311. 7780835. (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition). https://doi.org/10.1109/CVPR.2016.466