Dual Deep Network for Visual Tracking

Zhizhen Chi, Hongyang Li, Huchuan Lu, Ming-Hsuan Yang

Research output: Contribution to journal › Article

45 Citations (Scopus)

Abstract

Visual tracking addresses the problem of identifying and localizing an unknown target in a video, given the target specified by a bounding box in the first frame. In this paper, we propose a dual network to better utilize features across layers for visual tracking. It is observed that features in higher layers encode semantic context, while their counterparts in lower layers are sensitive to discriminative appearance. Thus, we exploit the hierarchical features in different layers of a deep model and design a dual structure to obtain better feature representations from the various streams, which is rarely investigated in previous work. To highlight the geometric contours of the target, we integrate the hierarchical feature maps with an edge detector as coarse prior maps, which further embed local details around the target. To leverage the robustness of our dual network, we train it with random patches that measure the similarity between the network activations and the target appearance, which serves as a regularization that forces the dual network to focus on the target object. The proposed dual network is updated online in a unique manner based on the observation that the target being tracked in consecutive frames should share more similar feature representations than those in the surrounding background. It is also found that, for a target object, the prior maps can further enhance performance by passing messages into the output maps of the dual network. Therefore, an independent component analysis with reference algorithm is employed to extract the target context using the prior maps as guidance. Online tracking is conducted by maximizing the posterior estimate on the final maps with stochastic and periodic updates. Quantitative and qualitative evaluations on two large-scale benchmark data sets show that the proposed algorithm performs favorably against state-of-the-art methods.
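
The dual-stream idea described in the abstract, combining appearance-sensitive features from lower convolutional layers with semantic features from higher layers, can be made concrete with a short sketch. The sketch below is not the authors' implementation: the VGG-16 backbone, the particular layer indices, and the fusion by upsampling and concatenation are assumptions chosen only for illustration.

```python
# Minimal illustrative sketch (not the authors' code) of hierarchical feature
# extraction: take an appearance-sensitive map from a lower VGG-16 layer and a
# semantic map from a higher layer, then fuse them into one representation.
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained VGG-16 convolutional trunk; the backbone choice is an assumption.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

LOW_LAYER = 8    # assumed: ReLU after conv2_2 (fine appearance detail)
HIGH_LAYER = 29  # assumed: ReLU after conv5_3 (coarse semantic context)

def dual_stream_features(frame: torch.Tensor) -> torch.Tensor:
    """frame: (1, 3, H, W) tensor already normalized for VGG-16."""
    feats = {}
    x = frame
    with torch.no_grad():
        for idx, layer in enumerate(vgg):
            x = layer(x)
            if idx == LOW_LAYER:
                feats["low"] = x        # 128-channel, high-resolution map
            if idx == HIGH_LAYER:
                feats["high"] = x       # 512-channel, low-resolution map
                break
    # Upsample the coarse semantic map to the fine map's resolution and
    # concatenate the two streams along the channel dimension.
    high_up = F.interpolate(feats["high"], size=feats["low"].shape[-2:],
                            mode="bilinear", align_corners=False)
    return torch.cat([feats["low"], high_up], dim=1)

# Example: a 224x224 search patch yields a fused (1, 640, 112, 112) map.
fused = dual_stream_features(torch.randn(1, 3, 224, 224))
print(fused.shape)
```

In the paper, such fused hierarchical maps are further combined with edge-based prior maps and refined by the online update and posterior-maximization steps summarized above; none of that machinery is sketched here.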

Original language: English
Article number: 7857085
Pages (from-to): 2005-2015
Number of pages: 11
Journal: IEEE Transactions on Image Processing
Volume: 26
Issue number: 4
DOI: 10.1109/TIP.2017.2669880
PubMed ID: 28212087
Scopus ID: 85018522105
Publication status: Published - April 2017

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Cite this

Chi, Zhizhen; Li, Hongyang; Lu, Huchuan; Yang, Ming-Hsuan. Dual Deep Network for Visual Tracking. In: IEEE Transactions on Image Processing, 2017, Vol. 26, No. 4, pp. 2005-2015. DOI: 10.1109/TIP.2017.2669880.
@article{eb6e27304a1448bb9b3c6ea587b524a6,
  title     = "Dual Deep Network for Visual Tracking",
  author    = "Zhizhen Chi and Hongyang Li and Huchuan Lu and Ming-Hsuan Yang",
  journal   = "IEEE Transactions on Image Processing",
  year      = "2017",
  month     = "4",
  volume    = "26",
  number    = "4",
  pages     = "2005--2015",
  doi       = "10.1109/TIP.2017.2669880",
  issn      = "1057-7149",
  publisher = "Institute of Electrical and Electronics Engineers Inc.",
  language  = "English",
}
