基于模型融合的电影推荐系统研究
Research on Movie Recommendation System Based on Model Fusion
摘要: 随着数字内容的爆炸式增长,传统推荐模型面临数据稀疏性、冷启动问题及复杂用户兴趣建模的挑战。传统矩阵分解方法虽能通过隐式反馈增强偏好捕捉,但难以挖掘高阶关联;而常规图神经网络虽能建模全局关系,却存在计算复杂度高与过平滑等瓶颈。为此,本研究提出一种创新的双分支混合推荐模型,核心在于深度融合了改进的协同过滤模型(T-SVD++)与异构图注意力网络(HGAT)。在模型架构上,本研究通过整合用户行为、电影属性及社交关系构建异构图。其中,T-SVD++分支致力于学习用户与电影的潜在因子,精准捕捉显式与隐式的局部交互偏好;HGAT分支则利用高度灵活的注意力机制聚合邻域信息,深入挖掘图结构中的高阶关联与用户动态兴趣。随后,两分支提取的深层特征经动态加权融合生成推荐结果。在MovieLens数据集上的实验表明,基于T-SVD++与HGAT的混合模型在精确率、召回率、F1分数及RMSE等指标上显著优于基线方法,尤其在冷启动场景下展现出极强的鲁棒性与显著提升的召回率。本研究验证了该创新架构在推荐系统中的卓越有效性,为电影推荐实践提供了全新解决方案,未来将进一步拓展其在实时推荐与多模态数据中的应用。
Abstract: With the explosive growth of digital content, traditional recommendation models face challenges such as data sparsity, cold-start problems, and the modeling of complex user interests. Traditional matrix factorization methods can enhance preference capture through implicit feedback, but they struggle to mine high-order associations; conventional graph neural networks can model global relationships, but suffer from high computational complexity and over-smoothing. To address these issues, this study proposes a dual-branch hybrid recommendation model that deeply integrates an improved collaborative filtering model (T-SVD++) with a heterogeneous graph attention network (HGAT). Architecturally, the study constructs a heterogeneous graph by integrating user behavior, movie attributes, and social relationships. The T-SVD++ branch learns latent factors of users and movies, capturing explicit and implicit local interaction preferences; the HGAT branch uses a flexible attention mechanism to aggregate neighborhood information and to mine high-order associations and dynamic user interests within the graph structure. The deep features extracted by the two branches are then fused with dynamic weighting to produce recommendations. Experiments on the MovieLens dataset show that the hybrid model significantly outperforms baseline methods in precision, recall, F1 score, and RMSE, and demonstrates strong robustness and markedly improved recall in cold-start scenarios. These results verify the effectiveness of the proposed architecture and provide a new solution for movie recommendation practice; future work will extend it to real-time recommendation and multimodal data.
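The abstract does not give the internals of T-SVD++, HGAT, or the fusion gate, so the following is only a minimal NumPy sketch of the three building blocks it names: the standard SVD++ prediction core (which T-SVD++ presumably extends), GAT-style attention aggregation over a neighborhood, and a sigmoid-gated weighted fusion of the two branch scores. All dimensions, weights, and function names here are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 4, 6, 8          # toy sizes, chosen for illustration only

# --- Branch 1: SVD++-style latent-factor prediction (the core T-SVD++ builds on) ---
mu = 3.5                                # global rating mean
b_u = rng.normal(0, 0.1, n_users)       # user biases
b_i = rng.normal(0, 0.1, n_items)       # item biases
P = rng.normal(0, 0.1, (n_users, d))    # user latent factors
Q = rng.normal(0, 0.1, (n_items, d))    # item latent factors
Y = rng.normal(0, 0.1, (n_items, d))    # implicit-feedback item factors

def svdpp_predict(u, i, rated_items):
    """r_hat = mu + b_u + b_i + q_i . (p_u + |N(u)|^(-1/2) * sum_{j in N(u)} y_j)."""
    implicit = Y[rated_items].sum(axis=0) / np.sqrt(len(rated_items))
    return mu + b_u[u] + b_i[i] + Q[i] @ (P[u] + implicit)

# --- Branch 2: graph-attention aggregation over a node's neighborhood ---
W = rng.normal(0, 0.1, (d, d))          # shared linear projection
a = rng.normal(0, 0.1, 2 * d)           # attention vector

def gat_aggregate(h_u, h_neighbors):
    """alpha_v = softmax_v(LeakyReLU(a . [W h_u || W h_v])); h' = sum_v alpha_v W h_v."""
    z_u = W @ h_u                                       # (d,)
    z_n = h_neighbors @ W.T                             # (k, d)
    pairs = np.concatenate([np.tile(z_u, (len(z_n), 1)), z_n], axis=1)
    scores = pairs @ a
    logits = np.where(scores > 0, scores, 0.2 * scores)  # LeakyReLU, slope 0.2
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()                                 # attention weights sum to 1
    return alpha @ z_n

# --- Dynamic weighted fusion of the two branch scores ---
def fuse(score_cf, score_gnn, gate_logit=0.0):
    g = 1.0 / (1.0 + np.exp(-gate_logit))   # sigmoid gate; learned jointly in the paper
    return g * score_cf + (1.0 - g) * score_gnn

# Usage: score user 0 / item 2, aggregate user 0's neighborhood, then fuse.
r_cf = svdpp_predict(0, 2, [0, 1, 3])
h_user = gat_aggregate(P[0], P[[1, 2, 3]])
final = fuse(r_cf, float(h_user @ Q[2]))
```

With a gate logit of 0 the fusion reduces to a plain average of the two branch scores; training the gate (e.g. from user/item features) is what makes the weighting "dynamic" in the sense the abstract describes.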