AI Assistant-Driven Performance Feedback Optimization for Social Media E-Commerce: Impact Mechanism and Boundary Conditions—Based on Moderated Mediation Analysis
Abstract: The digital era poses new questions for human resource management research. Among the components of performance management, integrating and optimizing the performance feedback process with artificial intelligence (AI) is a key topic for both research and practice. Taking AI-based performance feedback assistants as its research subject, this paper draws on a questionnaire survey of employees in the social media e-commerce industry to examine how employee task type moderates the effect of the assistant's social role ("expert" vs. "partner") on human-machine trust in the workplace. The findings indicate that: (1) for objective tasks, "expert"-type intelligent feedback assistants are more effective at enhancing human-machine trust, thereby promoting employees' acceptance of feedback; (2) for subjective tasks, "partner"-type intelligent feedback assistants are more effective at enhancing human-machine trust, likewise promoting feedback acceptance. Based on these findings, the paper offers recommendations for enterprises adopting AI performance assistants.
Article citation: Xu, J. and Wu, J. (2026) AI Assistant-Driven Performance Feedback Optimization for Social Media E-Commerce: Impact Mechanism and Boundary Conditions—Based on Moderated Mediation Analysis. E-Commerce Letters, 15(2), 743-750. https://doi.org/10.12677/ecl.2026.152213
