Improved YOLO11-Based Low-Light Vehicle and Pedestrian Detection Algorithm Using Image Enhancement
DOI: 10.12677/jsta.2026.142030
Authors: Tang Ao, Cong Peichao*: School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, Liuzhou, Guangxi
Keywords: Object Detection, YOLO11, Low-Light Environment, Image Enhancement
Abstract: To improve the detection accuracy of vehicles and pedestrians under low-light conditions, this paper proposes a low-light vehicle and pedestrian detection algorithm that balances detection accuracy and computational efficiency by integrating image enhancement with an improved YOLO11 model. First, HVI-CIDNet is employed to enhance the original low-light images, effectively restoring their illumination and structural information. Second, two improvements are made to YOLO11n: a global edge information propagation module is incorporated into the backbone network to extract edge features and fuse them into the backbone feature representation, and the detection head is restructured with PConv, reducing the model's parameter count while maintaining detection performance. Furthermore, the EMASlideLoss loss function is adopted to assign differentiated weights to targets of varying difficulty, alleviating the sample-distribution imbalance typical of low-light traffic scenes. Experimental results show that, compared with YOLO11n without image enhancement, the proposed method improves precision, recall, and mAP50 by 2.89%, 3.03%, and 2.83%, respectively, with only a slight increase in parameters and computational complexity essentially unchanged from the original YOLO11n. Among the compared lightweight object detection algorithms, the proposed method achieves the highest mAP50, validating its advantage in low-light environments and providing a reference for vehicle and pedestrian detection in low-light traffic scenarios.
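The hard-example weighting behind EMASlideLoss can be sketched as follows. The piecewise weight follows the Slide Loss proposed with YOLO-FaceV2 (samples just below the IoU threshold get an exponential boost), and the "EMA" part tracks that threshold as an exponential moving average of the batch mean IoU; the function and parameter names here are illustrative, not the authors' implementation.

```python
import numpy as np

def slide_weight(iou, mu):
    """Piecewise Slide-Loss weight: clearly easy samples (IoU >= mu) decay
    toward weight 1, clearly hard samples (IoU < mu - 0.1) keep weight 1,
    and borderline samples just below mu are up-weighted exponentially."""
    iou = np.asarray(iou, dtype=float)
    w = np.ones_like(iou)
    mid = (iou >= mu - 0.1) & (iou < mu)
    w[mid] = np.exp(1.0 - mu)          # boosted borderline samples
    high = iou >= mu
    w[high] = np.exp(1.0 - iou[high])  # smoothly decays toward 1 as IoU -> 1
    return w

class EMAThreshold:
    """Track the difficulty threshold mu as an exponential moving average of
    the per-batch mean IoU, so "hard" adapts as training progresses."""
    def __init__(self, decay=0.999, init=0.5):
        self.decay = decay
        self.mu = init

    def update(self, batch_iou):
        self.mu = self.decay * self.mu + (1 - self.decay) * float(np.mean(batch_iou))
        return self.mu
```

With mu = 0.5, a prediction at IoU 0.45 receives weight e^0.5 ≈ 1.65 while one at IoU 0.9 receives only e^0.1 ≈ 1.11, which is how the loss concentrates gradient on borderline, hard-to-classify targets in imbalanced low-light scenes.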
Citation: Tang, A. and Cong, P.C. (2026) Improved YOLO11-Based Low-Light Vehicle and Pedestrian Detection Algorithm Using Image Enhancement. Journal of Sensor Technology and Application, 14(2), 299-309. https://doi.org/10.12677/jsta.2026.142030
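For intuition on how PConv trims the detection head's cost: as introduced in FasterNet, a partial convolution applies a regular convolution to only a fraction r of the input channels and passes the remaining channels through unchanged, so its parameters and FLOPs scale as r² of a full convolution over the same tensor. A rough parameter-count comparison under the common r = 1/4 setting (illustrative arithmetic, not the paper's measured model sizes):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def pconv_params(c, k, r=0.25):
    """PConv convolves only r*c of the c channels (identity on the rest),
    so its weight count shrinks by a factor r**2 versus a full c->c conv."""
    cp = int(c * r)
    return conv_params(cp, cp, k)

full = conv_params(256, 256, 3)   # 3x3 conv over all 256 channels
partial = pconv_params(256, 3)    # 3x3 conv over only 64 of 256 channels
```

With r = 1/4, the partial convolution uses 1/16 of the full convolution's weights, which is consistent with the abstract's claim of reducing parameters while leaving most channel information untouched for later layers to mix.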
