Autonomous vehicle · Camera sensor · Deep learning · Depth estimation · Self-supervised
," />
Autonomous vehicle · Camera sensor · Deep learning · Depth estimation · Self-supervised
,"/>
Autonomous vehicle · Camera sensor · Deep learning · Depth estimation · Self-supervised
,"/>
Automotive Innovation ›› 2023, Vol. 6 ›› Issue (2): 268-280. doi: 10.1007/s42154-023-00223-6
Guofa Li1,2 · Xingyu Chi2 · Xingda Qu2
Abstract: Estimating depth from images captured by camera sensors is crucial for the advancement of autonomous driving technologies and has gained significant attention in recent years. However, most previous methods rely on stacked pooling or strided convolution to extract high-level features, which can limit network performance and lead to information redundancy. This paper proposes an improved bidirectional feature pyramid module (BiFPN) and a channel attention module (SE block: squeeze-and-excitation) to address these issues in existing methods based on a monocular camera sensor. The SE block redistributes channel feature weights to enhance useful information, while the improved BiFPN enables efficient fusion of multi-scale features. The proposed method is an end-to-end solution without any additional post-processing, resulting in efficient depth estimation. Experimental results show that the proposed method is competitive with state-of-the-art algorithms and preserves the fine-grained texture of scene depth.
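For readers unfamiliar with the two modules named above, the sketch below illustrates a squeeze-and-excitation channel attention block and a single BiFPN-style weighted fusion node in PyTorch. Layer names, the reduction ratio, and the fast-normalized fusion scheme are illustrative assumptions drawn from the original SENet and EfficientDet designs, not the exact configuration used in this paper.

```python
# Minimal sketches of squeeze-and-excitation channel attention and a
# BiFPN-style fusion node. All hyperparameters here are assumptions, not
# the authors' settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    """Squeeze-and-excitation: learns per-channel weights to emphasize useful features."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # global context per channel
        self.excitation = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)           # (B, C) channel descriptors
        w = self.excitation(w).view(b, c, 1, 1)  # (B, C, 1, 1) channel weights
        return x * w                             # redistribute channel weights


class BiFPNFusion(nn.Module):
    """One BiFPN-style node: learnable weighted sum of same-resolution feature maps."""

    def __init__(self, channels: int, num_inputs: int = 2, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_inputs))  # learnable fusion weights
        self.eps = eps
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        w = F.relu(self.w)            # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)  # fast normalized fusion
        fused = sum(wi * f for wi, f in zip(w, feats))
        return self.conv(fused)


if __name__ == "__main__":
    f = torch.randn(2, 64, 32, 32)          # dummy multi-scale feature map
    print(SEBlock(64)(f).shape)             # torch.Size([2, 64, 32, 32])
    print(BiFPNFusion(64)([f, f]).shape)    # torch.Size([2, 64, 32, 32])
```

In a depth-estimation decoder, nodes like these would be applied to encoder features at several resolutions before the final disparity prediction; the sketch only shows the two building blocks in isolation.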