Research on the Resistance of the CE-FedAvg Scheme to Model Poisoning Attacks
DOI: 10.12677/mos.2025.145390
Authors: Wang Shuang: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai; Liu Ya: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai; Lion Rock Labs of Cyberspace Security, Hong Kong; Zhao Fengyu: Department of Information and Intelligent Engineering, Shanghai Publishing and Printing College, Shanghai; Qu Bo: School of Cyberspace Technology, Hong Kong College of Technology, Hong Kong
Keywords: Federated Learning, Poisoning Attack, Model Poisoning Attack, Model Compression
Abstract: The widespread deployment of Internet of Things (IoT) devices has led to an explosive growth in data volume. Federated learning (FL) not only enables collaborative utilization of decentralized data from IoT devices but also enhances data security. However, in IoT environments with numerous devices and limited resources, model training efficiency becomes a critical challenge. Although quantization compression techniques reduce communication costs by lowering transmission precision, they may introduce risks of poisoning attacks. This paper investigates the resilience of the quantized federated learning method CE-FedAvg against model poisoning attacks. Specifically, by varying the proportion of malicious clients, CE-FedAvg is subjected to untargeted model poisoning attacks, scaling attacks, distributed backdoor attacks, and "a little is enough" attacks. Experiments conducted on the MNIST and CIFAR-10 datasets demonstrate that model poisoning attacks significantly degrade CE-FedAvg's performance metrics, including precision, F1 score, and recall. Finally, a defense strategy leveraging the consistency of model updates to detect malicious clients is proposed, which effectively identifies adversarial participants in federated learning.
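The defense idea stated in the abstract, using the consistency of client model updates to detect malicious participants, can be sketched as follows. This is an illustrative reconstruction, not the paper's actual algorithm: the use of cosine similarity against the coordinate-wise median update, and the zero-similarity threshold, are assumptions made for the example.

```python
import numpy as np

def flag_malicious(updates, threshold=0.0):
    """Flag clients whose update direction disagrees with the consensus.

    updates: list of 1-D NumPy arrays, one flattened model update per client.
    threshold: minimum cosine similarity to the consensus update (assumed value).
    Returns the indices of clients flagged as suspicious.
    """
    # Coordinate-wise median is robust to a minority of extreme updates,
    # unlike the plain mean, which a single scaled update can dominate.
    consensus = np.median(updates, axis=0)
    flagged = []
    for i, u in enumerate(updates):
        cos = np.dot(u, consensus) / (
            np.linalg.norm(u) * np.linalg.norm(consensus) + 1e-12
        )
        if cos < threshold:  # update points away from the consensus direction
            flagged.append(i)
    return flagged

# Toy demonstration: nine benign clients push in roughly the same direction,
# one malicious client pushes a large update in the opposite direction.
np.random.seed(0)
benign = [np.array([1.0, 1.0]) + 0.1 * np.random.randn(2) for _ in range(9)]
malicious = [np.array([-10.0, -10.0])]
print(flag_malicious(benign + malicious))  # → [9]
```

In a real deployment the flattened updates would come from the quantized client uploads after dequantization, and the threshold would need tuning per task; the median-based consensus is only one of several robust aggregation choices.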
Citation: Wang, S., Liu, Y., Zhao, F.Y. and Qu, B. (2025) Research on the Resistance of the CE-FedAvg Scheme to Model Poisoning Attacks. Modeling and Simulation, 14(5), 246-258. https://doi.org/10.12677/mos.2025.145390

References

[1] McMahan, B., Moore, E., Ramage, D., et al. (2017) Communication-Efficient Learning of Deep Networks from Decentralized Data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, 20-22 April 2017, 1273-1282.
[2] Yang, Q., Liu, Y., Chen, T. and Tong, Y. (2019) Federated Machine Learning. ACM Transactions on Intelligent Systems and Technology, 10, 1-19.
[3] Mills, J., Hu, J. and Min, G. (2020) Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT. IEEE Internet of Things Journal, 7, 5986-5994.
[4] Xiao, X., Tang, Z., Xiao, B., et al. (2023) A Survey of Privacy Protection and Security Defense in Federated Learning. Chinese Journal of Computers, 46(5), 1019-1044. (In Chinese)
[5] Zhao, Y.R., Zhang, J.B., Cao, Y.H., et al. (2025) Defense Method against Poisoning Attacks in Cloud-Edge Federated Learning Systems. Journal of Software (Online First), 1-21. Accessed 15 March 2025. (In Chinese)
[6] Cao, X. and Gong, N.Z. (2022) MPAF: Model Poisoning Attacks to Federated Learning Based on Fake Clients. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, 19-20 June 2022, 3395-3403.
[7] Bhagoji, A.N., Chakraborty, S., Mittal, P., et al. (2019) Analyzing Federated Learning through an Adversarial Lens. 2019 International Conference on Machine Learning, Long Beach, 10-15 June 2019, 634-643.
[8] Fang, M., Cao, X., Jia, J., et al. (2020) Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. 2020 29th USENIX Security Symposium (USENIX Security 20), Boston, 12-14 August 2020, 1605-1622.
[9] Bagdasaryan, E., Veit, A., Hua, Y., et al. (2020) How to Backdoor Federated Learning. 2020 International Conference on Artificial Intelligence and Statistics, Online, 26-28 August 2020, 2938-2948.
[10] Xie, C., Huang, K., Chen, P.Y., et al. (2019) DBA: Distributed Backdoor Attacks against Federated Learning. 2019 International Conference on Learning Representations, New Orleans, 6-9 May 2019.
[11] Baruch, G., Baruch, M. and Goldberg, Y. (2019) A Little Is Enough: Circumventing Defenses for Distributed Learning. Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, 8 December 2019, 8635-8645.
[12] Zhang, D., Yang, J., Ye, D. and Hua, G. (2018) LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks. In: Lecture Notes in Computer Science, Springer, 373-390.
[13] Banner, R., Nahshan, Y. and Soudry, D. (2019) Post-Training 4-Bit Quantization of Convolutional Networks for Rapid Deployment. Advances in Neural Information Processing Systems, 32, 7948-7956.
[14] Chen, J.Y., Cao, Z.Q., Zheng, H.B., et al. (2025) A Survey on Security of Model Quantization. Journal of Chinese Computer Systems (Online First), 1-23. http://kns.cnki.net/kcms/detail/21.1106.tp.20250117.1440.018.html, Accessed 15 March 2025. (In Chinese)