Reducing the cross-modality gap between two different domains is a challenging problem in heterogeneous face recognition (HFR). Current visual-domain face recognition systems cannot easily resolve the cross-modality discrepancy when the two compared domains are heterogeneous. Moreover, existing HFR datasets are far too small, which makes training considerably difficult. This paper proposes a novel two-step framework, consisting of an image translation module and a feature learning module, to obtain an enhanced cross-modality matching system for heterogeneous datasets. First, the image translation module combines a Preprocessing Chain (PC) method, CycleGAN, and a Siamese network; it preserves image content while transferring the style from the source domain to the target domain. Second, in the feature learning module, the training dataset and its translated images are used together to fine-tune a backbone model pre-trained on the visual domain. This enables discriminative and robust feature matching between the probe and gallery test datasets in the visual domain. The experimental results are evaluated in two scenarios, using the CUHK Face Sketch FERET (CUFSF) dataset and the CASIA NIR-VIS 2.0 dataset. The proposed method achieves better recognition performance than state-of-the-art methods.
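The matching step described above, comparing probe features against gallery features in the visual domain, can be sketched as a rank-1 nearest-neighbor search under cosine similarity. This is a minimal illustration only, not the paper's implementation; the identity names and embedding values below are hypothetical stand-ins for features produced by the fine-tuned backbone.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank1_match(probe, gallery):
    # Return the gallery identity whose embedding is most similar to the probe.
    return max(gallery, key=lambda gid: cosine_similarity(probe, gallery[gid]))

# Toy gallery of identity -> embedding (hypothetical values).
gallery = {
    "id_a": [0.9, 0.1, 0.0],
    "id_b": [0.1, 0.9, 0.2],
}
# A probe embedding, e.g. from a translated sketch or NIR image.
probe = [0.8, 0.2, 0.1]
print(rank1_match(probe, gallery))  # -> id_a
```

In practice the embeddings would be high-dimensional outputs of the fine-tuned network, and recognition accuracy is reported as the fraction of probes whose rank-1 match is the correct identity.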
Number of pages: 13
Publication status: Published - 2020
Bibliographical note
Funding Information: This work was supported by the R&D program for Advanced Integrated-intelligence for Identification (AIID) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2018M3E3A1057289).
© 2013 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Science (all)
- Materials Science (all)
- Electrical and Electronic Engineering