
Automotive Innovation ›› 2023, Vol. 6 ›› Issue (2): 190-203. doi: 10.1007/s42154-023-00220-9


An Adversarial Attack on Salient Regions of Traffic Sign

Jun Yan1 · Huilin Yin1 · Bin Ye2 · Wanchen Ge1 · Hao Zhang1 · Gerhard Rigoll3

  1 School of Electronic and Information Engineering, Tongji University, Caoan Gonglu Street, Shanghai 201804, China
    2 Ambarella Co., Ltd., Fangdian Road, Shanghai 201204, China
    3 Institute for Human-Machine Communication, Technical University of Munich, Arcisstraße, Munich D-80333, Germany
  • Online: 2023-05-28    Published: 2023-05-28

Abstract: State-of-the-art deep neural networks are vulnerable to adversarial examples with small-magnitude perturbations. In deep-learning-based automated driving, such adversarial attacks expose the weakness of AI models, which can lead to severe issues regarding the safety of the intended functionality (SOTIF). From the perspective of causality, adversarial attacks can be regarded as confounding effects that exploit spurious correlations established by non-causal features. However, few previous studies have been devoted to building the relationship between adversarial examples, causality, and SOTIF. This paper proposes a robust physical adversarial perturbation generation method that targets the salient image regions of the targeted attack class under the guidance of class activation mapping (CAM). With CAM, the confounding effect can be maximized through the intermediate variable of the front-door criterion between images and targeted attack labels. In the simulation experiment, the proposed method achieves a 94.6% targeted attack success rate (ASR) on the released dataset when speed-limit-60 km/h signs are attacked as speed-limit-80 km/h signs. In the real physical experiment, the targeted ASR is 75% and the untargeted ASR is 100%. Beyond these state-of-the-art attack results, a detailed experiment evaluates the performance of the proposed method under low resolutions, diverse optimizers, and multifarious defense methods. The code and data are released at the repository: https://github.com/yebin999/rp2-with-cam.
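To illustrate the core idea, the following is a minimal sketch, not the authors' released implementation, of a CAM-guided targeted attack: a Grad-CAM mask for the target class restricts the optimized perturbation to salient pixels. It assumes a pretrained PyTorch classifier `model`, an input batch `x` in [0, 1], a target label `y_target`, and a chosen convolutional `feature_layer`; all hyperparameters are illustrative, and physical-world terms used by RP2 (expectation over transformations, printability loss) are omitted.

    import torch
    import torch.nn.functional as F

    def grad_cam_mask(model, x, target_class, feature_layer):
        """Return a [0, 1] Grad-CAM saliency mask for `target_class`."""
        feats = []
        handle = feature_layer.register_forward_hook(
            lambda m, i, o: feats.append(o))
        logits = model(x)
        handle.remove()
        fmap = feats[0]                                      # (B, C, h, w)
        grads = torch.autograd.grad(logits[:, target_class].sum(), fmap)[0]
        weights = grads.mean(dim=(2, 3), keepdim=True)       # channel weights
        cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                            align_corners=False)
        cam = cam - cam.amin(dim=(2, 3), keepdim=True)       # normalize per image
        return cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)

    def cam_guided_attack(model, x, y_target, feature_layer,
                          steps=200, lr=0.01):
        """Optimize a perturbation confined to the target class's salient region."""
        model.eval()
        mask = grad_cam_mask(model, x, y_target, feature_layer).detach()
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        target = torch.full((x.shape[0],), y_target,
                            dtype=torch.long, device=x.device)
        for _ in range(steps):
            x_adv = (x + mask * delta).clamp(0, 1)           # perturb salient pixels only
            loss = F.cross_entropy(model(x_adv), target)     # pull toward target class
            opt.zero_grad()
            loss.backward()
            opt.step()
        return (x + mask * delta).clamp(0, 1).detach()

Confining the perturbation to the CAM mask is the design choice that corresponds to the paper's salient-region idea; the full RP2-with-CAM pipeline is available in the repository linked above.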

Key words: Automated driving · Adversarial examples · Safety of the intended functionality (SOTIF) · Class activation mapping (CAM) · Causality