This paper presents a novel approach for designing a robotic orthosis controller that accounts for physical human-robot interaction (pHRI). Because designing a robot controller that effectively assists humans with the appropriate magnitude and phase is laborious, computer simulation of the human-robot system offers advantages in time and cost. We therefore propose a two-stage policy training framework based on deep reinforcement learning (deep RL) that designs the robot controller via human-robot dynamic simulation. In Stage 1, a policy that generates human gaits is obtained by deep RL-based imitation learning on a healthy-subject model using musculoskeletal simulation in OpenSim-RL. In Stage 2, human models whose right soleus muscle is weakened to a specified severity are created by modifying the model from Stage 1, and a robotic orthosis is attached to the right ankle of these models. The orthosis policy that assists walking with optimal torque is then trained on them. An elastic foundation model predicts the pHRI at the coupling between the human and the robotic orthosis. Comparative analysis of kinematic and kinetic simulation results against experimental data shows that the derived musculoskeletal model reproduces human walking and that the orthosis policy obtained from two-stage training can assist the weakened soleus muscle. The proposed approach was further validated by applying the learned policy to an ankle orthosis, conducting a gait experiment, and comparing the results with the simulation.
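The two-stage structure described above can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical: `DummyGaitEnv` stands in for the OpenSim-RL musculoskeletal environment, a single scalar `policy_gain` stands in for the neural-network policy, and plain random search replaces the deep RL algorithm. Only the staging (train gait on a healthy model, then train orthosis assistance on a soleus-weakened model) mirrors the paper.

```python
import random

class DummyGaitEnv:
    """Hypothetical stand-in for the OpenSim-RL musculoskeletal environment."""

    def __init__(self, soleus_weakness=0.0, orthosis=False):
        self.weakness = soleus_weakness  # 0.0 = healthy, 1.0 = fully weakened
        self.orthosis = orthosis         # whether an ankle orthosis is attached

    def rollout(self, policy_gain):
        # Toy reward: an imitation term (peaked at an arbitrary "good" gain)
        # minus a gait penalty from muscle weakness, plus orthosis assistance
        # torque scaled by the policy gain when the orthosis is attached.
        imitation = 1.0 - abs(policy_gain - 0.5)
        assist = self.weakness * policy_gain if self.orthosis else 0.0
        return imitation - self.weakness + assist

def train(env, iters=200, seed=0):
    # Random-search policy optimization: a placeholder for deep RL.
    rng = random.Random(seed)
    best_gain, best_reward = 0.0, float("-inf")
    for _ in range(iters):
        gain = rng.random()
        reward = env.rollout(gain)
        if reward > best_reward:
            best_gain, best_reward = gain, reward
    return best_gain, best_reward

# Stage 1: learn a gait policy on the healthy-subject model.
healthy = DummyGaitEnv(soleus_weakness=0.0)
gait_gain, _ = train(healthy)

# Stage 2: weaken the right soleus, attach the orthosis, and train the
# orthosis torque policy on the modified model.
weakened = DummyGaitEnv(soleus_weakness=0.6, orthosis=True)
orthosis_gain, reward = train(weakened)
```

In the actual framework, each stage would run a deep RL algorithm against the full musculoskeletal simulation, with the Stage 2 reward reflecting assistance of the weakened soleus through the pHRI coupling.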
|Number of pages|12|
|Journal|IEEE Transactions on Neural Systems and Rehabilitation Engineering|
|Publication status|Published - 2022|
Bibliographical note: Publisher Copyright:
© 2001-2011 IEEE.
All Science Journal Classification (ASJC) codes
- Internal Medicine
- Biomedical Engineering