Research on Demand Forecasting Method for Edge Intelligence-Driven Federated Learning for E-Commerce Supply Chain
Abstract: Warehouse nodes in e-commerce supply chains urgently need localized demand forecasting, yet traditional centralized forecasting exposes raw data to privacy leakage and suffers from high latency. To address both problems, this paper proposes EdgeFL-DP, an edge intelligence-driven federated learning framework for e-commerce supply chain demand forecasting. The framework deploys time-series prediction models on edge warehouse nodes and optimizes them collaboratively through federated aggregation, improving global prediction accuracy without sharing raw data. Specifically, we first construct a dual-branch predictor combining an LSTM branch and a Transformer branch as the local learner, capturing short-term fluctuations and long-term dependencies respectively. Second, we design FedAdapt, a weighted federated aggregation algorithm that dynamically adjusts aggregation weights according to the data-quality metrics and distribution characteristics reported by each node. Third, we introduce a differential privacy mechanism that perturbs model gradients with noise, combined with a Top-K gradient sparsification strategy to reduce communication overhead. Experiments on simulated datasets constructed from real e-commerce scenarios show that EdgeFL-DP reduces RMSE by 17.8% compared with a centralized single-branch Transformer baseline and by 23.6% compared with a centralized single-branch LSTM baseline (the dual-branch architecture contributing approximately 6.0% and FedAdapt approximately 7.6% of the improvement), while cutting data transmission by 87.4% and communication latency by 65.2%. Under a privacy budget of ε = 8.0, the accuracy loss introduced by differential privacy is only 3.7%, owing to the noise-averaging effect of aggregating across multiple participating nodes.
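The FedAdapt aggregation step described above can be sketched as follows. This is an illustrative implementation only: the abstract does not specify the exact weighting rule, so the combination of sample counts and quality scores below (`w = n * q`, normalized) is an assumption, and the function and parameter names are hypothetical.

```python
import numpy as np

def fedadapt_aggregate(client_updates, sample_counts, quality_scores):
    """Aggregate client model updates with weights that combine data
    volume (as in FedAvg) with a locally reported data-quality score.

    client_updates : list of 1-D numpy arrays (flattened model deltas)
    sample_counts  : number of local training samples per client
    quality_scores : data-quality metric in [0, 1] reported per client
    """
    n = np.asarray(sample_counts, dtype=float)
    q = np.asarray(quality_scores, dtype=float)
    # Assumed weighting: sample-size weight modulated by quality,
    # then normalized so the weights sum to one.
    w = n * q
    w = w / w.sum()
    stacked = np.stack(client_updates)           # shape: (K, d)
    return (w[:, None] * stacked).sum(axis=0)    # weighted average
```

With equal sample counts and quality scores this reduces to plain FedAvg; raising one node's quality score shifts the global update toward that node's contribution.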
Citation: Zhao, J. and Li, S. (2026) Research on Demand Forecasting Method for Edge Intelligence-Driven Federated Learning for E-Commerce Supply Chain. E-Commerce Letters, 15, 801-813. https://doi.org/10.12677/ecl.2026.154457
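The two communication-side mechanisms in the abstract, Top-K gradient sparsification and differentially private gradient perturbation, can be sketched as below. This is a minimal sketch under standard formulations (magnitude-based Top-K selection and the Gaussian mechanism with L2 clipping); the paper's actual parameterization is not given in the abstract, and all names here are illustrative.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of the gradient;
    the rest are zeroed, so only k (index, value) pairs are sent."""
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

def dp_perturb(grad, clip_norm, noise_multiplier, rng):
    """Clip the gradient to an L2 bound and add Gaussian noise
    calibrated to that bound (Gaussian mechanism)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=grad.shape)
    return clipped + noise
```

Sending only the Top-K entries is what yields the kind of large reduction in transmitted data reported above, while the noise added per node is partially averaged out when many nodes' perturbed gradients are aggregated, consistent with the small accuracy loss reported at ε = 8.0.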
