Abstract
In this study, we applied reinforcement learning based on the proximal policy optimization (PPO) algorithm to perform motion planning for an unmanned aerial vehicle (UAV) in an open space with static obstacles. Because applying reinforcement learning to a real UAV is limited by time and cost, we used the Gazebo simulator to train a virtual quadrotor UAV in a virtual environment. As training progressed, the model's mean reward and goal rate increased. Furthermore, tests of the trained model show that the UAV reaches the goal with an 81% goal rate using the simple reward function proposed in this work.
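The abstract does not specify the exact reward function or PPO hyperparameters; as a hedged illustration only, the sketch below shows (a) a hypothetical distance-based shaping reward of the kind often used for goal-reaching with obstacles, and (b) the standard per-sample PPO clipped surrogate term that the proximal policy optimization algorithm maximizes. All constants and function names here are illustrative assumptions, not the paper's values.

```python
def shaped_reward(prev_dist, curr_dist, collided=False, reached=False):
    """Hypothetical shaping reward: progress toward the goal each step,
    with terminal bonus/penalty. The constants are illustrative
    assumptions, not taken from the paper."""
    if reached:
        return 10.0   # terminal bonus for reaching the goal
    if collided:
        return -10.0  # terminal penalty for hitting a static obstacle
    return prev_dist - curr_dist  # positive when the UAV moves closer


def ppo_clipped_term(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate for one sample:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r is the new/old policy probability ratio and A the advantage."""
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)
```

In a Gazebo training loop, `shaped_reward` would be evaluated from the quadrotor's distance to the goal before and after each action, and `ppo_clipped_term` averaged over a rollout batch gives the policy loss (negated) for gradient ascent.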
| Original language | English |
|---|---|
| Title of host publication | 2020 20th International Conference on Control, Automation and Systems, ICCAS 2020 |
| Publisher | IEEE Computer Society |
| Pages | 784-787 |
| Number of pages | 4 |
| ISBN (Electronic) | 9788993215205 |
| DOIs | |
| Publication status | Published - 2020 Oct 13 |
| Event | 20th International Conference on Control, Automation and Systems, ICCAS 2020 - Busan, Korea, Republic of (Duration: 2020 Oct 13 → 2020 Oct 16) |
Publication series
| Name | International Conference on Control, Automation and Systems |
|---|---|
| Volume | 2020-October |
| ISSN (Print) | 1598-7833 |
Conference
| Conference | 20th International Conference on Control, Automation and Systems, ICCAS 2020 |
|---|---|
| Country/Territory | Korea, Republic of |
| City | Busan |
| Period | 20/10/13 → 20/10/16 |
Bibliographical note
Funding Information: This work was supported by an Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government [20ZR1100, Core Technologies of Distributed Intelligence Things for Solving Industry and Society Problems].
Publisher Copyright:
© 2020 Institute of Control, Robotics, and Systems - ICROS.
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Computer Science Applications
- Control and Systems Engineering
- Electrical and Electronic Engineering