A Hybrid Approach to Explainable Recommendation: Combining Synthetic Coordinate, Kolmogorov-Arnold Network and Transformer
DOI: 10.12677/mos.2025.145370 | Supported by the National Natural Science Foundation of China
Authors: Jun Ai, Gehui Xu, Zhan Su, Hang Su (School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai)
Keywords: Explainable Recommendation, Synthetic Coordinates, Kolmogorov-Arnold Network, Transformer
Abstract: Explainable recommendation has attracted growing attention in recent years because it offers not only personalized suggestions but also the reasons behind them. Although Transformer-based text generation can produce highly natural recommendation explanations, its model complexity and computational cost pose challenges. To address these issues, this paper proposes a hybrid explainable recommendation model built on the Transformer architecture that combines synthetic coordinates with Kolmogorov-Arnold Networks (KAN). By mapping users and items into a low-dimensional embedding space and using their spatial relationships to guide explanation generation, the model achieves efficient rating prediction and explanation generation. Experimental results show that the model reaches state-of-the-art performance on both tasks, with significant gains on key metrics such as BLEU-4 and USR. The study further shows that combining optimized embedding vectors with a modular design can improve the performance of explainable recommender systems while reducing model complexity, computational difficulty, and resource consumption.
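The synthetic-coordinate idea summarized in the abstract — users and items embedded as points in a low-dimensional space, with the predicted rating driven by their distance — can be sketched as follows. This is a minimal illustrative sketch only; the dimensions, the rating-from-distance mapping, and the gradient update below are assumptions, not the paper's exact formulation:

```python
import numpy as np

# Sketch: users and items as coordinates in a low-dimensional space; a higher
# rating corresponds to a smaller Euclidean distance between the two points.
rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 5, 2
U = rng.normal(scale=0.1, size=(n_users, dim))   # user coordinates
V = rng.normal(scale=0.1, size=(n_items, dim))   # item coordinates

# Toy observed ratings on a 1-5 scale: (user, item, rating)
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (2, 3, 2.0), (3, 4, 5.0)]
r_max = 5.0

def predict(u, i):
    # Rating decreases linearly with distance between user and item points.
    return r_max - np.linalg.norm(U[u] - V[i])

# Plain SGD on squared error: pairs rated high are pulled together,
# pairs rated low are pushed apart.
lr = 0.05
for _ in range(500):
    for u, i, r in ratings:
        diff = U[u] - V[i]
        dist = np.linalg.norm(diff) + 1e-8
        err = predict(u, i) - r          # err > 0 means the pair is too close
        U[u] += lr * err * diff / dist   # move user away from (or toward) item
        V[i] -= lr * err * diff / dist   # and the item symmetrically

print(predict(0, 0))  # close to the observed rating of 5
```

After training, the learned coordinates double as the low-dimensional embeddings whose spatial relationships can condition downstream modules.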
Citation: Ai, J., Xu, G., Su, Z. and Su, H. (2025) A Hybrid Approach to Explainable Recommendation: Combining Synthetic Coordinate, Kolmogorov-Arnold Network and Transformer. Modeling and Simulation, 14(5), 13-30. https://doi.org/10.12677/mos.2025.145370
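The Kolmogorov-Arnold Network component mentioned in the abstract replaces the usual linear-map-plus-fixed-activation layer with a learnable univariate function on every input-to-output edge. A minimal forward-pass sketch, assuming each edge function is a coefficient vector over a small fixed basis — an illustrative reading of the KAN idea, not the paper's implementation:

```python
import numpy as np

def basis(x):
    # Simple 1-D basis evaluated elementwise for each input: [x, x^2, sin(x), 1].
    return np.stack([x, x**2, np.sin(x), np.ones_like(x)], axis=-1)

class KANLayer:
    """One layer: output[q] = sum_j phi_qj(x[j]), with phi_qj learnable per edge."""
    def __init__(self, in_dim, out_dim, n_basis=4, seed=0):
        rng = np.random.default_rng(seed)
        # One coefficient vector per edge: shape (out_dim, in_dim, n_basis).
        self.C = rng.normal(scale=0.1, size=(out_dim, in_dim, n_basis))

    def forward(self, x):
        B = basis(x)                          # (in_dim, n_basis)
        # phi_qj(x[j]) = C[q, j] . basis(x[j]); sum the edge functions per unit.
        return np.einsum('qjb,jb->q', self.C, B)

layer = KANLayer(in_dim=3, out_dim=2)
y = layer.forward(np.array([0.5, -1.0, 2.0]))
print(y.shape)  # (2,)
```

Stacking such layers, with the coefficients trained by gradient descent, gives a compact alternative to an MLP head for mapping embeddings to ratings.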
