Automotive Innovation, 2024, Vol. 7, Issue 1: 121-137. DOI: 10.1007/s42154-023-00249-w
Abstract: Traffic sign detection is a crucial task for autonomous driving systems. However, the performance of deep learning-based traffic sign detection algorithms is highly sensitive to the illumination conditions of the scene. While existing algorithms achieve high accuracy in well-lit environments, their accuracy drops sharply in low-light scenarios. This paper proposes an end-to-end framework, LLTH-YOLOv5, tailored for traffic sign detection in low-light scenarios, which enhances the input images to improve detection performance. The proposed framework comprises two stages: a low-light enhancement stage and an object detection stage. In the low-light enhancement stage, a lightweight enhancement network is designed that learns its parameters with multiple non-reference loss functions and enhances the image through pixel-level adjustment with high-order curves. In the object detection stage, BiFPN is introduced to replace the PANet of YOLOv5, and a transformer-based detection head is designed to improve the accuracy of small-target detection. Moreover, a GhostDarkNet53 backbone built on the Ghost module replaces the original backbone of YOLOv5, improving the real-time performance of the model. Experimental results show that the proposed method significantly improves the accuracy of traffic sign detection in low-light scenarios while satisfying the real-time requirements of autonomous driving.
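The abstract does not give the exact form of the high-order enhancement curves, so the sketch below is only a minimal PyTorch illustration of the pixel-level adjustment idea, assuming a Zero-DCE-style quadratic curve LE(x) = x + α·x·(1 − x) applied iteratively with per-pixel parameter maps α predicted by the lightweight enhancement network. The function name curve_enhance, the iteration count, and the tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import torch

def curve_enhance(image: torch.Tensor, alpha_maps: torch.Tensor) -> torch.Tensor:
    """Iteratively apply high-order enhancement curves to a low-light image.

    Assumed curve form (Zero-DCE style): LE(x) = x + alpha * x * (1 - x),
    applied once per iteration with a per-pixel, per-iteration alpha map.
    The paper's exact curve and loss definitions are not given in the
    abstract, so this is an illustrative sketch only.

    image:      (B, 3, H, W) tensor with values in [0, 1]
    alpha_maps: (B, 3 * n_iter, H, W) tensor of curve parameters in [-1, 1]
    """
    n_iter = alpha_maps.shape[1] // image.shape[1]
    x = image
    for i in range(n_iter):
        alpha = alpha_maps[:, 3 * i:3 * (i + 1), :, :]
        x = x + alpha * x * (1.0 - x)  # quadratic curve keeps values in [0, 1]
    return x

# Example: enhance a batch of two dim frames with 8 curve iterations.
low_light = torch.rand(2, 3, 256, 256) * 0.2          # simulated low-light input
alphas = torch.rand(2, 24, 256, 256) * 2.0 - 1.0      # placeholder for network output
enhanced = curve_enhance(low_light, alphas)
print(enhanced.shape)  # torch.Size([2, 3, 256, 256])
```

In the full framework the alpha maps would come from the lightweight enhancement network trained with the non-reference losses, and the enhanced image would then be passed to the modified YOLOv5 detector.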
Xiaoqiang Sun, Kuankuan Liu, Long Chen, Yingfeng Cai & Hai Wang
URL: http://auin.chinasaejournal.com.cn/EN/10.1007/s42154-023-00249-w
http://auin.chinasaejournal.com.cn/EN/Y2024/V7/I1/121