In-air hand gesture signature (HGS) has emerged as a new technique for dynamic signature recognition, owing to its advantageous touchless acquisition procedure. Unlike a conventional dynamic signature, an HGS produces images that carry the spatial and temporal information of the signing action. Deep learning algorithms are well suited to learning these image features; however, they require a tremendous amount of data to reach an optimal model, which makes the collection process costly. Transfer learning offers an alternative solution for such small-sample-size problems. This paper investigates the feasibility of transfer learning for classifying a hand gesture-based signature. In our system, the hand region is detected and segmented from each depth image. The salient spatial and temporal features are then formed from the resulting images. The knowledge of a pre-trained model is transferred and reused to classify the newly seen image features. We further investigate the robustness of the proposed approach against two common forgery attacks: (1) random forgeries and (2) skilled forgeries. Empirical results demonstrate that the proposed approach achieves 99.03% precision and 98.89% recall in classifying HGS. On top of this, the proposed approach also proves robust against both kinds of forgery attack, achieving low error rates of 0.78% under random forgery and 4.88% under skilled forgery.
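The transfer-learning idea described above — reusing a pre-trained model's knowledge and training only a small classifier for the new signature classes — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: a fixed random projection stands in for a frozen pre-trained backbone, the synthetic "HGS images" are hypothetical, and only a lightweight nearest-centroid head is fitted on the transferred features.

```python
# Illustrative sketch (not the paper's code): transfer learning as
# "frozen feature extractor + small trainable head" on HGS-like images.
import numpy as np

rng = np.random.default_rng(0)

D_PIXELS, D_FEAT, N_SIGNERS = 64 * 64, 32, 3  # hypothetical sizes

# Stand-in for a pre-trained backbone: its weights stay frozen,
# mirroring how a pre-trained model's knowledge is reused unchanged.
W_frozen = rng.standard_normal((D_PIXELS, D_FEAT)) / np.sqrt(D_PIXELS)

def extract_features(images):
    """Embed flattened feature images with the frozen backbone."""
    return np.maximum(images @ W_frozen, 0.0)  # ReLU activation

# Tiny synthetic training set: one cluster of images per signer.
centers = rng.standard_normal((N_SIGNERS, D_PIXELS))
train_x = np.vstack(
    [c + 0.1 * rng.standard_normal((20, D_PIXELS)) for c in centers]
)
train_y = np.repeat(np.arange(N_SIGNERS), 20)

# Trainable "head": nearest class centroid in the transferred space.
feats = extract_features(train_x)
centroids = np.vstack(
    [feats[train_y == k].mean(axis=0) for k in range(N_SIGNERS)]
)

def classify(images):
    """Assign each image to the signer with the closest centroid."""
    f = extract_features(images)
    d = ((f[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# A new sample near signer 1's cluster should map to signer 1.
probe = centers[1] + 0.1 * rng.standard_normal(D_PIXELS)
print(classify(probe[None, :]))  # expected: [1]
```

In practice the frozen projection would be replaced by the convolutional layers of an actual pre-trained network, and the head by a trained fully connected classifier, but the division of labor — frozen transferred features, small task-specific head — is the same.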
Bibliographical note

Funding Information:
This work was supported by the Multimedia University internal Mini Fund, Malaysia, under grant No. MMUI/180180.
© 2021 The Authors