Fire and Smoke Detection Model Based on Improved YOLOv5s
Abstract: To address the insufficient feature-extraction capability and low detection accuracy of current fire and smoke detection models, an improved fire and smoke detection model based on YOLOv5s is proposed. A learnable center residual module is constructed by fusing the LVC (Learnable Visual Center) module with the LRes (Light ResNet) module, strengthening the learning of fire and smoke edge features while retaining the input feature information. Within each residual block of the YOLOv5s C3 module, hierarchical residual connections are built by replacing the Bottleneck module with the multi-scale Res2Net module, enhancing global feature extraction. Experiments on a large dataset show that, compared with the original YOLOv5s, the improved model raises the mean average precision for fire by 3% and the mean average precision over all targets by 2.2%.
Citation: Cai, H. and Nan, F. (2024) Fire and Smoke Detection Model Based on Improved YOLOv5s. Computer Science and Application, 14(11): 161-169. https://doi.org/10.12677/csa.2024.1411225
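The hierarchical residual connections described in the abstract (splitting a residual block's channels into subsets so later subsets see progressively larger receptive fields, as in Res2Net) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function name `res2net_split` is hypothetical, and the random per-channel mixing weights stand in for the learned 3×3 convolutions of the actual C3 module.

```python
import numpy as np

def res2net_split(x, scales=4, rng=None):
    """Res2Net-style hierarchical residual connections (illustrative sketch).

    x: feature map of shape (C, H, W); C must be divisible by `scales`.
    The channels are split into `scales` subsets. The first subset passes
    through unchanged; each later subset is transformed after adding the
    previous subset's output, so receptive fields grow hierarchically.
    The random weights here are stand-ins for learned convolutions.
    """
    rng = rng or np.random.default_rng(0)
    C, H, W = x.shape
    w = C // scales
    subsets = [x[i * w:(i + 1) * w] for i in range(scales)]

    outputs = [subsets[0]]  # y1 = x1: identity branch
    for i in range(1, scales):
        # y2 = K2(x2); yi = Ki(xi + y_{i-1}) for i > 2
        inp = subsets[i] if i == 1 else subsets[i] + outputs[-1]
        # stand-in for a learned conv: per-channel linear mixing
        weight = rng.standard_normal((w, w)) / np.sqrt(w)
        outputs.append(np.einsum('oc,chw->ohw', weight, inp))

    return np.concatenate(outputs, axis=0)  # same shape as the input
```

In the real module each `Ki` is a 3×3 convolution, so the i-th subset's output aggregates information from all earlier subsets, which is what gives the block its multi-scale feature-extraction capability.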
