Developing a user interface (UI) suitable for headset environments is one of the challenges in the field of augmented reality (AR) technologies. This study proposes a hands-free UI for an AR headset that exploits the facial gestures of the wearer to recognize user intentions. The facial gestures of the headset wearer are detected by a custom-designed sensor that measures skin deformation based on the infrared diffusion characteristics of human skin. We designed a deep neural network classifier to determine the user's intended gestures from the skin-deformation data, which are exploited as user input commands for the proposed UI system. The proposed classifier is composed of a spatiotemporal autoencoder and a deep embedded clustering algorithm, trained in an unsupervised manner. The UI device was embedded in a commercial AR headset, and several experiments were performed on the online sensor data to verify the operation of the device. The resulting hands-free UI for an AR headset achieved an average user-command recognition accuracy of 95.4% in tests with participants.
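To make the described pipeline concrete, below is a minimal, hypothetical sketch (not the authors' released code) of the architecture the abstract names: a 1-D spatiotemporal convolutional autoencoder over windows of multichannel skin-deformation signals, followed by a deep embedded clustering (DEC) head with a Student's t soft-assignment and KL-divergence target loss. The channel count, window length, gesture count, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CHANNELS = 8   # assumed number of skin-deformation sensor channels
WINDOW = 64      # assumed samples per gesture window
N_GESTURES = 5   # assumed number of facial-gesture commands (clusters)

class SpatioTemporalAE(nn.Module):
    """Conv1d autoencoder: space = sensor channels, time = window axis."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * (WINDOW // 4), latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * (WINDOW // 4)),
            nn.ReLU(),
            nn.Unflatten(1, (64, WINDOW // 4)),
            nn.ConvTranspose1d(64, 32, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, N_CHANNELS, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

class DECHead(nn.Module):
    """Soft cluster assignment via a Student's t kernel (DEC-style)."""
    def __init__(self, n_clusters=N_GESTURES, latent_dim=16):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(n_clusters, latent_dim))

    def forward(self, z):
        # q[i, j]: soft assignment of embedding i to cluster j
        d2 = torch.cdist(z, self.centroids).pow(2)
        q = (1.0 + d2).reciprocal()
        return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    """Sharpened target distribution p used in the DEC KL loss."""
    w = q.pow(2) / q.sum(dim=0)
    return w / w.sum(dim=1, keepdim=True)

# One unsupervised training step: reconstruction loss + clustering loss.
ae, head = SpatioTemporalAE(), DECHead()
opt = torch.optim.Adam(list(ae.parameters()) + list(head.parameters()), 1e-3)
x = torch.randn(32, N_CHANNELS, WINDOW)  # stand-in sensor batch
z, x_hat = ae(x)
q = head(z)
p = target_distribution(q).detach()
loss = F.mse_loss(x_hat, x) + F.kl_div(q.log(), p, reduction="batchmean")
opt.zero_grad()
loss.backward()
opt.step()
```

At inference time, a gesture window would be assigned to the cluster with the highest soft assignment (`q.argmax(dim=1)`), with clusters mapped to UI commands; this mapping step is an assumption, as the abstract does not detail it.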
Bibliographical note
Acknowledgments: The authors thank Ashutosh Mishra for valuable recommendations on the future direction of this research during discussions while preparing the rebuttal. The authors performed this work as part of the ICT Consilience Creative Program (IITP-2019-2017-0-01015) supported by IITP.
Funding: This work was supported by the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00244, HMD Facial Expression Recognition Sensor and Cyber-Interaction Interface Technology).
© 2019 by the authors. Licensee MDPI, Basel, Switzerland.
All Science Journal Classification (ASJC) codes
- Analytical Chemistry
- Atomic and Molecular Physics, and Optics
- Electrical and Electronic Engineering