A Lightweight Lane Line Detection Algorithm Based on Multi-Dimensional Self-Attention Mechanism
Abstract: Lane line detection is a fundamental task in autonomous driving. Current state-of-the-art methods mainly treat lane detection as a pixel-wise segmentation problem. As a dense prediction task, such segmentation requires heavy computation, so inference is relatively slow; yet as a subtask of autonomous driving, real-time performance is an essential requirement. We therefore propose an extremely lightweight side-guided detection model to achieve real-time lane line detection. Furthermore, to address the weakening of dependencies between features caused by convolutional feature extraction, we introduce a dual-dimension self-attention mechanism. Extensive experiments on existing lane detection benchmark datasets show that our method achieves competitive performance in both speed and accuracy.
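The dual-dimension self-attention described in the abstract can be understood as attention computed over two different axes of a feature map: spatial positions and channels. The following is a minimal NumPy sketch of that idea, not the paper's actual implementation; it omits the learned query/key/value projections a real network would use, and all shapes and function names are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(feat):
    """Spatial self-attention: every position attends to all others.
    feat: (C, H, W) feature map; returns the same shape."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)            # (C, N), N = H*W positions
    attn = softmax(x.T @ x, axis=-1)      # (N, N) position affinity
    out = x @ attn.T                      # re-aggregate features per position
    return out.reshape(C, H, W)

def channel_attention(feat):
    """Channel self-attention: models inter-channel dependencies."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)            # (C, N)
    attn = softmax(x @ x.T, axis=-1)      # (C, C) channel affinity
    out = attn @ x                        # mix channels by affinity
    return out.reshape(C, H, W)

feat = np.random.rand(8, 4, 6).astype(np.float32)
# Fuse the two attention dimensions by summation (one common choice).
fused = position_attention(feat) + channel_attention(feat)
print(fused.shape)  # (8, 4, 6)
```

Summing the two branches is one simple fusion choice; concatenation followed by a 1x1 convolution is another common option in dual-attention designs.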
Citation: Cui, J.D. and Cui, Y. (2022) A Lightweight Lane Line Detection Algorithm Based on Multi-Dimensional Self-Attention Mechanism. Computer Science and Application, 12(1), 108-113. https://doi.org/10.12677/CSA.2022.121012
