,"/> <h4> Depth Estimation Based on Monocular Camera Sensors in Autonomous Vehicles: A Self-supervised Learning Approach </h4>

Automotive Innovation ›› 2023, Vol. 6 ›› Issue (2): 268-280. doi: 10.1007/s42154-023-00223-6



Guofa Li1,2 · Xingyu Chi2 · Xingda Qu2
  

  1. College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing 400044, China
  2. Institute of Human Factors and Ergonomics, College of Mechatronics and Control Engineering, Shenzhen University, 3688 Nanhai Avenue, Shenzhen 518060, China
  • Online: 2023-05-28  Published: 2023-05-28



Abstract: Estimating depth from images captured by camera sensors is crucial for the advancement of autonomous driving technologies and has gained significant attention in recent years. However, most previous methods rely on stacked pooling or strided convolution to extract high-level features, which can limit network performance and lead to information redundancy. This paper proposes an improved bidirectional feature pyramid module (BiFPN) and a channel attention module (Seblock: squeeze and excitation) to address these issues in existing methods based on monocular camera sensors. The Seblock redistributes channel feature weights to enhance useful information, while the improved BiFPN enables efficient fusion of multi-scale features. The proposed method is an end-to-end solution that requires no additional post-processing, resulting in efficient depth estimation. Experimental results show that the proposed method is competitive with state-of-the-art algorithms and preserves the fine-grained texture of scene depth.
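
The article itself provides no source code; the sketch below is only a minimal PyTorch illustration of the two generic building blocks the abstract names: a squeeze-and-excitation (SE) channel attention block that redistributes channel feature weights, and a BiFPN-style weighted fusion of features from two scales. Module names, channel sizes, the reduction ratio, and the fusion topology are illustrative assumptions and do not reproduce the authors' network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (generic form, not the paper's exact block).

    Global average pooling "squeezes" each channel to a scalar, a small
    bottleneck MLP "excites" per-channel weights, and the input feature
    map is rescaled channel-wise.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: (B, C)
        w = self.fc(w).view(b, c, 1, 1)  # excite: per-channel weights in [0, 1]
        return x * w                     # redistribute channel feature weights


class WeightedFusion(nn.Module):
    """BiFPN-style weighted fusion of two feature maps from different scales.

    Each input gets a learnable non-negative weight; the weights are
    normalized so the fusion is a soft, data-driven average.
    """

    def __init__(self, channels: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(2))
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.eps = eps

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Resize b to a's spatial size so features from different pyramid levels can be combined.
        b = F.interpolate(b, size=a.shape[2:], mode="bilinear", align_corners=False)
        w = F.relu(self.weights)
        w = w / (w.sum() + self.eps)
        return self.conv(w[0] * a + w[1] * b)


if __name__ == "__main__":
    feat_hi = torch.randn(1, 64, 48, 160)  # hypothetical higher-resolution encoder feature
    feat_lo = torch.randn(1, 64, 24, 80)   # hypothetical lower-resolution encoder feature
    fused = WeightedFusion(64)(SEBlock(64)(feat_hi), feat_lo)
    print(fused.shape)  # torch.Size([1, 64, 48, 160])
```

In a full BiFPN, such weighted fusions are chained across pyramid levels in both top-down and bottom-up directions; the two-input case above only illustrates how the learnable fusion weights are normalized before the features are merged.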

Key words: Autonomous vehicle · Camera sensor · Deep learning · Depth estimation · Self-supervised
')">Autonomous vehicle · Camera sensor · Deep learning · Depth estimation · Self-supervised