Unexpected collision avoidance driving strategy using deep reinforcement learning

Myounghoe Kim, Seongwon Lee, Jaehyun Lim, Jongeun Choi, Seong Gu Kang

Research output: Contribution to journal › Article

Abstract

In this paper, we generated intelligent self-driving policies that minimize injury severity in unexpected traffic signal violation scenarios at an intersection using deep reinforcement learning. We provided guidance on reward engineering in terms of the multiplicity of the objective function. We used a deep deterministic policy gradient method in a simulated environment to train self-driving agents. We designed two agents, one with a single-objective reward function of collision avoidance and the other with a multi-objective reward function of both collision avoidance and goal-approaching. We evaluated their performance by comparing the percentages of collision avoidance and the average injury severity against those of human drivers and an autonomous emergency braking (AEB) system. The percentage of collision avoidance of our agents was 78.89% higher than that of human drivers and 84.70% higher than that of the AEB system. The average injury severity score of our agents was only 8.92% of that of human drivers and 6.25% of that of the AEB system.
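The abstract contrasts a single-objective reward (collision avoidance only) with a multi-objective reward that adds a goal-approaching term. A minimal sketch of how such reward functions might be composed is shown below; all names, weights, and penalty values here are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical reward sketch for the two agents described in the abstract.
# The penalty and weight values are illustrative assumptions, not the
# paper's actual hyperparameters.
COLLISION_PENALTY = -100.0   # assumed penalty applied on collision
GOAL_WEIGHT = 1.0            # assumed weight on goal-approaching progress


def single_objective_reward(collided: bool) -> float:
    """Reward shaped only for collision avoidance."""
    return COLLISION_PENALTY if collided else 0.0


def multi_objective_reward(collided: bool,
                           prev_dist_to_goal: float,
                           dist_to_goal: float) -> float:
    """Collision avoidance plus a goal-approaching term: the agent earns
    positive reward proportional to the distance closed toward the goal."""
    reward = single_objective_reward(collided)
    reward += GOAL_WEIGHT * (prev_dist_to_goal - dist_to_goal)
    return reward
```

Under this kind of shaping, the single-objective agent is indifferent among all collision-free behaviors, while the multi-objective agent is additionally pushed to make progress through the intersection, which is the trade-off the paper's reward-engineering guidance addresses.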

Original language: English
Article number: 8961990
Pages (from-to): 17243-17252
Number of pages: 10
Journal: IEEE Access
Volume: 8
DOIs
Publication status: Published - 2020 Jan 1

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)
