Automotive Innovation ›› 2023, Vol. 6 ›› Issue (3): 453-465.doi: 10.1007/s42154-023-00235-2


On-Ramp Merging for Highway Autonomous Driving: An Application of a New Safety Indicator in Deep Reinforcement Learning

Guofa Li1 · Weiyan Zhou2 · Siyan Lin2 · Shen Li3 · Xingda Qu2
  

  1 College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing 400044, China
    2 Institute of Human Factors and Ergonomics, College of Mechatronics and Control Engineering, Shenzhen University, 3688 Nanhai Avenue, Shenzhen 518060, Guangdong Province, China
    3 School of Civil Engineering, Tsinghua University, Beijing 100084, China
  • Online: 2023-08-21  Published: 2023-09-21

Abstract: This paper proposes an improved decision-making method based on deep reinforcement learning to address on-ramp merging challenges in highway autonomous driving. A novel safety indicator, time difference to merging (TDTM), is introduced and used in conjunction with the classic time to collision (TTC) indicator to evaluate driving safety and to help the merging vehicle find a suitable gap in traffic, thereby enhancing driving safety. The autonomous driving agent is trained using the Deep Deterministic Policy Gradient (DDPG) algorithm, and an action-masking mechanism is deployed to prevent unsafe actions during the policy exploration phase. The proposed DDPG + TDTM + TTC solution is tested in on-ramp merging scenarios with different driving speeds in SUMO and achieves a merging success rate of 99.96% without significantly impacting traffic efficiency on the main road, outperforming both DDPG + TTC and plain DDPG.

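The abstract mentions two ingredients that can be illustrated compactly: the classic TTC safety indicator and the action-masking mechanism that overrides unsafe actions during exploration. The sketch below is not from the paper (it does not reproduce the proposed TDTM indicator, whose definition is given in the full text); it assumes a simple 1-D longitudinal gap, and the threshold and braking values are illustrative placeholders.

```python
def time_to_collision(gap_m: float, v_follower: float, v_leader: float) -> float:
    """Classic TTC: gap divided by closing speed; infinite when not closing."""
    closing_speed = v_follower - v_leader
    if closing_speed <= 0:
        return float("inf")  # the gap is opening, no collision course
    return gap_m / closing_speed


def mask_action(accel_cmd: float, gap_m: float, v_follower: float,
                v_leader: float, ttc_threshold: float = 3.0,
                safe_decel: float = -3.0) -> float:
    """Action mask: replace the policy's acceleration command with a
    safe deceleration whenever TTC falls below a threshold."""
    if time_to_collision(gap_m, v_follower, v_leader) < ttc_threshold:
        return safe_decel  # override the exploratory action with braking
    return accel_cmd


# A 20 m gap closing at 10 m/s gives TTC = 2 s, so the mask brakes;
# a 100 m gap at the same closing speed gives TTC = 10 s, so the
# policy's command passes through unchanged.
```

In a DDPG setup such a mask sits between the actor's continuous output and the simulator step, so the replay buffer only ever contains actions the safety layer permitted.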