Abstract
NIR-to-VIS face recognition matches faces across two different domains by extracting domain-invariant features. This is a challenging problem because of the differing domain characteristics and the lack of large NIR face datasets. To reduce the domain discrepancy while reusing existing face recognition models, we propose a 'Relation Module' that can simply be added on to any face recognition model. The local features extracted from a face image carry information about each component of the face. Given the two different domain characteristics, using the relationships between local features is more domain-invariant than using the local features themselves. In addition to these relationships, positional information, such as the distance from lips to chin or from eye to eye, also provides domain-invariant cues. In our Relation Module, a Relation Layer implicitly captures these relationships, and a Coordinates Layer models the positional information. Furthermore, our proposed triplet loss with conditional margin reduces intra-class variation during training, yielding additional performance improvements. Unlike general face recognition models, our add-on module does not need to be pre-trained on a large-scale dataset; it is fine-tuned only with the CASIA NIR-VIS 2.0 database. With the proposed module, we achieve improvements of 14.81% in rank-1 accuracy and 15.47% in verification rate at 0.1% FAR compared to two baseline models.
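As a rough illustration of the ideas summarized above (not the authors' exact implementation), the sketch below shows how an add-on relation module might pair local backbone features with their normalized coordinates and aggregate pairwise relation features, alongside a triplet loss whose margin is conditioned on whether the negative sample comes from the other domain. All layer sizes, the pairing/aggregation scheme, and the margin condition are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationModule(nn.Module):
    """Illustrative add-on relation module (hypothetical layer sizes).

    Expects a backbone feature map of shape (B, C, H, W); appends
    normalized (x, y) coordinates to every local feature (a simple
    stand-in for the Coordinates Layer) and feeds all ordered pairs
    of local features through a small MLP (the Relation Layer),
    summing the pairwise outputs into one embedding per face.
    """

    def __init__(self, in_channels, hidden_dim=256, out_dim=256):
        super().__init__()
        self.relation_mlp = nn.Sequential(
            nn.Linear(2 * (in_channels + 2), hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, feat_map):
        b, c, h, w = feat_map.shape
        feats = feat_map.flatten(2).transpose(1, 2)          # (B, N, C), N = H*W
        # Coordinates Layer (sketch): concatenate normalized positions so
        # relations can encode distances such as eye-to-eye or lips-to-chin.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=feat_map.device),
            torch.linspace(-1, 1, w, device=feat_map.device),
            indexing="ij",
        )
        coords = torch.stack([xs, ys], dim=-1).reshape(1, h * w, 2).expand(b, -1, -1)
        feats = torch.cat([feats, coords], dim=-1)           # (B, N, C+2)
        # Relation Layer (sketch): build all ordered pairs of local features.
        n = h * w
        fi = feats.unsqueeze(2).expand(b, n, n, -1)
        fj = feats.unsqueeze(1).expand(b, n, n, -1)
        pairs = torch.cat([fi, fj], dim=-1)                  # (B, N, N, 2*(C+2))
        rel = self.relation_mlp(pairs).sum(dim=(1, 2))       # (B, out_dim)
        return F.normalize(rel, dim=1)


def triplet_loss_conditional_margin(anchor, positive, negative, cross_domain,
                                    m_same=0.3, m_cross=0.5):
    """Triplet loss whose margin depends on a condition; here the condition is
    whether anchor and negative come from different domains (NIR vs. VIS).
    The margin values and the condition itself are assumptions for illustration."""
    d_ap = (anchor - positive).pow(2).sum(dim=1)
    d_an = (anchor - negative).pow(2).sum(dim=1)
    margin = torch.where(cross_domain,
                         torch.full_like(d_ap, m_cross),
                         torch.full_like(d_ap, m_same))
    return F.relu(d_ap - d_an + margin).mean()
```

Summing over all N² pairs keeps the embedding permutation-invariant, but this pairing is memory-heavy for large feature maps; a practical implementation would likely pool or subsample the local features first.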
| Original language | English |
|---|---|
| Title of host publication | 2019 International Conference on Biometrics, ICB 2019 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| ISBN (Electronic) | 9781728136400 |
| DOIs | |
| Publication status | Published - 2019 Jun |
| Event | 2019 International Conference on Biometrics, ICB 2019 - Crete, Greece, Duration: 2019 Jun 4 → 2019 Jun 7 |
Publication series
| Name | 2019 International Conference on Biometrics, ICB 2019 |
|---|---|
Conference
| Conference | 2019 International Conference on Biometrics, ICB 2019 |
|---|---|
| Country/Territory | Greece |
| City | Crete |
| Period | 19/6/4 → 19/6/7 |
Bibliographical note
Funding Information:
This research was supported by the Multi-Ministry Collaborative R&D Program (R&D program for complex cognitive technology) through the National Research Foundation of Korea (NRF) funded by MSIT, MOTIE, KNPA (NRF-2018M3E3A1057289). This work was also supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2016-0-00197, Development of the high-precision natural 3D view generation technology using smart-car multi sensors and deep learning).
Publisher Copyright:
© 2019 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Computer Vision and Pattern Recognition
- Signal Processing
- Statistics, Probability and Uncertainty
- Demography