Person reidentification is of great importance in visual surveillance and multiperson tracking across multiple camera views. Two fundamental problems are critical for person reidentification: 1) how to account for appearance variation or feature transformation caused by viewpoint changes and 2) how to learn a discriminative distance metric for reidentification. In this paper, we propose an algorithm in which both feature transformation and metric learning are exploited and jointly optimized. We learn local models from subsets of training samples, with regularization imposed by a global model trained on the entire data set. The learned local models enhance discriminative strength and generalization ability. Experimental results on the Viewpoint Invariant Pedestrian Recognition (VIPeR), Queen Mary University of London underGround Re-IDentification (GRID), CUHK01, and CUHK03 benchmark data sets show that the proposed sample-specific view-invariant approach performs favorably against state-of-the-art person reidentification methods.
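The idea of learning local metrics regularized toward a global one can be illustrated with a minimal sketch. This is not the paper's actual formulation; it assumes a simple inverse-covariance Mahalanobis metric and a hypothetical convex blend `(1 - lam) * M_local + lam * M_global` as the regularization, with `lam` controlling how strongly local models are pulled toward the global model.

```python
import numpy as np

def mahalanobis_metric(X, eps=1e-6):
    # Inverse covariance as a simple Mahalanobis metric (illustrative stand-in
    # for a learned metric; eps keeps the covariance invertible).
    cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    return np.linalg.inv(cov)

def local_metric_with_global_reg(X_subset, M_global, lam=0.5):
    # Hypothetical regularization: blend the subset's metric toward the
    # global metric; lam in [0, 1] sets the regularization strength.
    M_local = mahalanobis_metric(X_subset)
    return (1.0 - lam) * M_local + lam * M_global

def distance(x, y, M):
    # Squared Mahalanobis distance under metric M.
    d = x - y
    return float(d @ M @ d)

rng = np.random.default_rng(0)
X_all = rng.normal(size=(200, 4))   # stand-in for all training features
X_sub = X_all[:40]                  # one local subset of training samples
M_g = mahalanobis_metric(X_all)
M_l = local_metric_with_global_reg(X_sub, M_g, lam=0.3)
```

Because both blended terms are positive semidefinite, the local metric remains a valid metric matrix; distances under it stay nonnegative.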
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
Publication status: Published - Oct 2019
Bibliographical note
Funding Information:
Manuscript received July 30, 2017; revised March 28, 2018 and August 5, 2018; accepted December 20, 2018. Date of publication March 12, 2019; date of current version September 18, 2019. This work was supported by the National Natural Science Foundation of China under Grant 61725202, Grant 61829102, Grant 91538201, and Grant 61751212. (Corresponding author: Huchuan Lu.) Z. Liu and H. Lu are with the School of Information and Communication Engineering, Dalian University of Technology, Dalian 116023, China (e-mail: email@example.com; firstname.lastname@example.org).
© 2012 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence