In a virtual reality (VR) environment, where visual stimuli dominate the other senses, users experience cybersickness: the illusion of self-motion disturbs the body's sense of balance. The VR experience is therefore accompanied by an unavoidable sickness referred to as visually induced motion sickness (VIMS). In this article, our primary purpose is to estimate the VIMS score of VR content and, simultaneously, to calculate the temporally induced VIMS sensitivity. To this end, we propose a novel architecture composed of two consecutive networks: 1) a neurological representation and 2) a spatiotemporal representation. In the first stage, the network imitates and learns the neurological mechanism of motion sickness. In the second stage, salient spatial and temporal features are extracted over the generated frames. After training, our model calculates the VIMS sensitivity of each frame of the VR content using a weakly supervised approach, since temporal VIMS scores are unannotated. Furthermore, we release a large-scale VR content database. In the experiments, the proposed framework outperforms existing methods for VIMS score prediction, including feature-engineering and deep-learning-based approaches. Finally, we propose a way to visualize the cognitive response to visual stimuli and show that the induced sickness is activated with a tendency similar to that reported in clinical studies.
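The two-stage pipeline and the weakly supervised readout can be sketched in toy form. This is a minimal illustration only, not the authors' model: the feature dimensions, the tanh/sigmoid layers, and mean pooling of per-frame sensitivities into a single clip-level VIMS score are all assumptions introduced here to show the idea that only a clip-level label supervises the unannotated frame-level sensitivities.

```python
import numpy as np

def stage1_neuro_features(frames, W1):
    # Stage 1 (hypothetical): project each frame's raw features into a
    # learned "neurological" representation of the motion-sickness cue.
    return np.tanh(frames @ W1)

def stage2_frame_sensitivity(feats, w2):
    # Stage 2 (hypothetical): score each frame from its spatiotemporal
    # feature; a sigmoid keeps per-frame VIMS sensitivity in [0, 1].
    logits = feats @ w2
    return 1.0 / (1.0 + np.exp(-logits))

def clip_vims_score(frame_sensitivity):
    # Weak supervision: only a clip-level VIMS label is available, so the
    # unannotated per-frame sensitivities are pooled (mean pooling here)
    # into one clip-level score that the label can supervise.
    return float(frame_sensitivity.mean())

rng = np.random.default_rng(0)
frames = rng.normal(size=(30, 8))   # 30 frames, 8 toy features per frame
W1 = rng.normal(size=(8, 4))        # stand-in for stage-1 weights
w2 = rng.normal(size=4)             # stand-in for stage-2 weights

sens = stage2_frame_sensitivity(stage1_neuro_features(frames, W1), w2)
score = clip_vims_score(sens)       # one score per clip, one sensitivity per frame
```

In a trained version, the clip-level loss would backpropagate through the pooling so that the per-frame sensitivities emerge without frame-level annotation.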
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Publication status: Published - 2022 Feb 1
Bibliographical note: Publisher Copyright © 2012 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence