Virtual training has received considerable research attention in recent years owing to its potential in applications such as virtual military training, virtual emergency evacuation, and virtual firefighting. To provide trainees with an interactive training environment, human action recognition methods have become a major component of virtual training simulators. Wearable motion-capture-suit-based human action recognition has been widely used for virtual training, although wearing the suit may distract the trainee. In this paper, we present a virtual training simulator based on 360° multi-view human action recognition using multiple Kinect sensors, which provides an immersive environment without requiring the trainee to wear any device. To this end, the proposed simulator comprises coordinate-system transformation, front-view Kinect sensor tracking, multi-skeleton fusion, skeleton normalization, orientation compensation, feature extraction, and classifier modules. Virtual military training is presented as a potential application of the proposed simulator. To train and test the simulator, a database of 25 military training actions was constructed. In the tests, the proposed simulator provided an excellent, natural training environment in terms of frame-by-frame classification accuracy, action-by-action classification accuracy, and observational latency.
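The coordinate-system transformation and multi-skeleton fusion steps named above can be illustrated with a minimal sketch. The function names, the rigid-transform calibration inputs, and the use of simple per-joint averaging are assumptions for illustration, not the paper's actual method: each Kinect reports joints in its own camera frame, a per-sensor rigid transform (R, t) maps them into a shared world frame, and the aligned skeletons are fused joint by joint.

```python
import numpy as np

def to_world(joints, R, t):
    """Map an (N, 3) array of joint positions from a sensor's camera
    frame into the shared world frame via a rigid transform (R, t).
    R and t are assumed to come from an offline calibration step."""
    return joints @ R.T + t

def fuse_skeletons(skeletons, transforms):
    """Fuse one skeleton per sensor into a single skeleton by
    averaging corresponding joints after alignment. A real system
    would likely weight sensors by tracking confidence; plain
    averaging is used here only as a simple stand-in."""
    world = [to_world(s, R, t) for s, (R, t) in zip(skeletons, transforms)]
    return np.mean(world, axis=0)

# Example: two sensors observing the same 2-joint skeleton, where the
# second sensor's frame is shifted by 1 m along x (hypothetical numbers).
skeleton_a = np.array([[0.0, 1.0, 2.0], [0.5, 1.5, 2.5]])
skeleton_b = skeleton_a - np.array([1.0, 0.0, 0.0])  # seen 1 m off in x
transforms = [
    (np.eye(3), np.zeros(3)),                 # sensor A is the world frame
    (np.eye(3), np.array([1.0, 0.0, 0.0])),   # sensor B: translated by 1 m
]
fused = fuse_skeletons([skeleton_a, skeleton_b], transforms)
```

After alignment both sensors agree, so the fused skeleton matches sensor A's observation exactly; in practice the sensors disagree slightly and the averaging smooths that noise.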
Publisher Copyright: © 2013 IEEE.