Robots with Emotions: Emotional Expression and Communication of Artificial Agents
DOI: 10.12677/ap.2024.145305
Authors: 田倍嘉, 刘宏艳 (Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, Zhejiang); 胡治国 (Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang)
Keywords: Robot, Virtual Agent, Emotion Recognition, Affective Communication
Abstract: Artificial agents are becoming part of human life. This paper reviews research on affective communication between artificial agents and humans from two angles: the main channels through which artificial agents express emotion (facial expression and body expression), and how humans recognize and evaluate artificial agents' emotions, together with the factors that influence that recognition. Future research can deepen this work in three respects: diversifying the emotion types covered, integrating emotional and non-emotional cues, and tailoring emotional expression to target user groups and contexts.
Citation: 田倍嘉, 胡治国, 刘宏艳 (2024). Robots with Emotions: Emotional Expression and Communication of Artificial Agents. Advances in Psychology (心理学进展), 14(5), 212-220. https://doi.org/10.12677/ap.2024.145305

1. Introduction

Technology is continually reshaping the human social environment, and more and more artificial agents are entering human life. An agent is usually defined as a hardware or software computing system that is autonomous, proactive, reactive, and socially capable (Beer, Fisk, & Rogers, 2010). Both robots and virtual agents fall into this category: robots are physically embodied computational agents, whereas virtual agents are presented in 2D/3D digital form (Hortensius, Hekele, & Cross, 2018; see Figure 1). Artificial agents have been applied in healthcare, personal assistance, teaching, and many other domains, but their success in these domains depends on whether people like them and are willing to interact with them. Research has shown that humans respond to artificial agents much as they do in human-to-human interaction, and that affective communication is a key factor in promoting that interaction (Hortensius et al., 2018; Krämer, Kopp, Becker-Asano, & Sommer, 2013). This requires artificial agents to be not only intelligent but also capable of generating and expressing emotion. Studies have demonstrated that intelligent artificial agents can already express emotion in dynamic conversation, although the contextual appropriateness of their emotional expression still needs improvement (Youssef et al., 2015). This paper reviews recent research on emotional expression by, and emotion recognition of, artificial agents, and on that basis proposes directions for future research.

Figure 1. Artificial agents. A. Virtual agents: a1, boy agent Billie (Hosseinpanah et al., 2018); a2, female agent (Ochs et al., 2010); a3 and a4, male agents (Fabri et al., 2002; Perugia et al., 2021). B. Humanoid robot agents: b1, female android robot (Ishi et al., 2019); b2, social robot Ryan (Mollahosseini et al., 2018); b3, robot Pepper (Wolfert et al., 2022); b4, NAO robot (Ceha et al., 2019); b5, robot BERT2 (Bazo et al., 2010). C. Non-humanoid agents: c1, Peoplebot healthcare robot (Broadbent et al., 2013); c2, Lego robot (Novikova & Watts, 2014); c3, iCat (Beer et al., 2009); c4, virtual agent chick (Numata et al., 2020)

2. Emotional Expression by Artificial Agents

Designers of artificial agents have attempted to realize emotional expression through multiple channels, of which facial expression and body expression are the two most common.

2.1. Generating Facial Expressions

Facial expression is the most direct form of emotional display, conveying emotional states through movements of the muscles around the eyes, face, and mouth (彭聃龄, 2018). A well-designed artificial agent conveys its intended message through effective facial expressions.

Affective computing models for artificial agents take several forms. Some, following the work of Ekman and colleagues, construct the six basic expressions: happiness, anger, disgust, fear, sadness, and surprise. Others build expressions from dimensional theory in terms of valence and arousal, which allows, for example, a weak emotion expressed at 20% intensity. Still others base the expression-generation process on appraisal theory, covering the complete course from onset through maintenance to decay (Pelachaud, 2009).
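
To make the dimensional approach concrete, the following minimal Python sketch (a toy example of ours, not any published system; the emotion coordinates are assumptions, not calibrated values) places the six basic emotions in valence-arousal space and scales them toward neutral to produce, e.g., a "20% weak" emotion:

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    valence: float  # -1 (negative) .. +1 (positive)
    arousal: float  # -1 (calm) .. +1 (excited)

# Hypothetical anchor points in valence-arousal space for the six basic
# emotions; the coordinates are illustrative only.
BASIC_EMOTIONS = {
    "happiness": EmotionState(0.8, 0.5),
    "surprise":  EmotionState(0.2, 0.9),
    "fear":      EmotionState(-0.6, 0.8),
    "anger":     EmotionState(-0.7, 0.7),
    "disgust":   EmotionState(-0.7, 0.3),
    "sadness":   EmotionState(-0.7, -0.4),
}

def scaled_emotion(label: str, intensity: float) -> EmotionState:
    """Scale an emotion toward the neutral origin, so intensity=0.2
    yields the '20% weak' version used to drive a subdued expression."""
    e = BASIC_EMOTIONS[label]
    return EmotionState(e.valence * intensity, e.arousal * intensity)

weak_joy = scaled_emotion("happiness", 0.2)  # drives a faint smile
```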

Researchers have also developed algorithms for generating more complex facial expressions, such as masked or fake ones. Consider the smile: it is the simplest and most easily recognized facial expression, requiring only activity of the zygomaticus major, and artificial agents typically smile to express positive emotion or signal friendliness (Ochs, Niewiadomski, & Pelachaud, 2010). Some smiles, however, are merely polite, and smiles also occur in negative situations, such as an anxious smile. Ochs et al. (2010) used morphological features (including AU6 cheek raising, AU24 lip pressing, AU12 zygomaticus major action, lip-corner symmetry, mouth opening, and smile amplitude) and dynamic features (duration, onset and offset speed) to build three kinds of smiles for a virtual agent: amused, polite, and embarrassed. Rehm and André (2005) designed a felt smile and a masking smile (covering genuine disgust, anger, fear, or sadness) for a virtual agent; users perceived the difference between the two, rated the genuinely smiling agent as more reliable and trustworthy, and endorsed what it said more strongly. Krumhuber, Manstead, Cosker, Marshall, and Rosin (2008) built genuine and fake smiles for virtual agents through different dynamic features, the genuine smiles having longer onset and offset times and the fake smiles shorter ones; users generally judged virtual humans showing genuine smiles as working more positively and more appropriately.
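
The feature set described by Ochs et al. (2010) can be pictured as one parameter record per smile type. The sketch below is a hedged illustration in Python; the field names and numeric values are our assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class SmileSpec:
    au6_cheek_raise: float  # 0..1, AU6 (orbicularis oculi), "felt" marker
    au12_lip_corner: float  # 0..1, AU12 zygomaticus major pull
    au24_lip_press: float   # 0..1, AU24
    symmetry: float         # 0..1, lip-corner symmetry
    mouth_open: float       # 0..1, mouth aperture
    amplitude: float        # 0..1, overall smile size
    onset_s: float          # seconds to reach apex
    offset_s: float         # seconds to decay

# Three smile types in the spirit of amused / polite / embarrassed; all
# numbers are placeholders. Note the longer onset/offset of the amused
# (felt) smile, echoing the dynamics in Krumhuber et al. (2008).
AMUSED      = SmileSpec(0.8, 0.9, 0.0, 0.9, 0.5, 0.8, onset_s=0.6, offset_s=0.8)
POLITE      = SmileSpec(0.1, 0.5, 0.2, 0.9, 0.1, 0.4, onset_s=0.3, offset_s=0.3)
EMBARRASSED = SmileSpec(0.2, 0.5, 0.5, 0.4, 0.1, 0.4, onset_s=0.3, offset_s=0.6)
```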

In recent years many further factors have been incorporated into the facial expression of artificial agents, such as display frequency. Krämer et al. (2013) had users converse with three virtual agents: one that never smiled, one that smiled occasionally, and one that smiled frequently. Users smiled longer when the agent smiled, with no difference between the occasional and frequent conditions. Other examples are expressive wrinkles (dynamic wrinkles, i.e., skin deformations caused by muscle contraction and associated with emotional reactions) and pupil-diameter changes (which occur when perceiving others' emotional states, constriction being linked to disgust, anger, and sadness, and dilation to happiness, surprise, and fear). Milcent, Geslin, Kadri, and Richir (2019) found that expressive wrinkles affected emotion recognition for a virtual agent, whereas pupil diameter did not. Whether the artificial agent shares physical space with the human also matters: Mollahosseini, Abdollahi, Sweeny, Cole, and Mahoor (2018) compared three agents (a physical robot, a telepresent projection of the robot, and a virtual agent) and found that recognition accuracy for certain expressions was significantly higher with the physical robot than with 2D on-screen presentation.

2.2. Developing Body Expressions

Body expression is another channel of emotional display; for robots without facial muscles, it becomes the principal channel of emotional expression.

Researchers have tried to construct body expressions from movements of different body parts or movements of different quality. Novikova and Watts (2014) developed emotional movements for a non-humanoid robot based on approach-avoidance behavior; the movements also carried two attributes, shape (changing the distance to the observer or the robot's own size) and effort (movement quality, such as smoothness), and experiments showed that they effectively conveyed the valence, arousal, and dominance of anger, sadness, fear, happiness, and surprise. Van de Perre et al. (2018) built three body expressions (happiness, fear, sadness) for a robot from arm movements and body postures; their algorithm can also blend body expressions with deictic actions, folding pointing or reaching movements into the emotional posture and thereby modulating movement speed and posture amplitude. The real-time algorithm of Randhavane et al. (2019) lets a virtual agent convey happiness, sadness, anger, and calm through gaze and gait. Ishi, Minato, and Ishiguro (2019) developed a laughter algorithm for a robot involving coordinated motion of the face, head, and upper body: narrowing eyelids, raised lip corners and cheeks, eye blinks, and pitching movements of the head and torso. Beck, Stevens, Bard, and Cañamero (2012) motion-captured human actors, 3D-reconstructed lip positions, and fitted the movement data to a virtual agent through animation to generate body expressions for the NAO robot; this approach improved emotion recognition rates and can also generate blended emotions such as happiness-pride.
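
To make the idea of mixing emotional postures with functional (deictic) actions concrete, here is a minimal joint-space blending sketch. It is a simplification in the spirit of Van de Perre et al. (2018), not their method; the joint layout, target values, and blend rule are all assumptions:

```python
import numpy as np

# Hypothetical 4-DOF arm targets (radians): an emotional posture and a
# functional pointing gesture; the joint values are invented for illustration.
HAPPY_POSTURE = np.array([0.9, 0.4, -0.2, 0.1])   # arms raised and open
POINT_GESTURE = np.array([0.2, 1.1, 0.0, 0.6])    # arm extended toward target

def blend(emotional: np.ndarray, functional: np.ndarray, w: float) -> np.ndarray:
    """Linear joint-space blend: w=1 keeps the pure emotional posture,
    w=0 the pure functional gesture; intermediate w folds the gesture
    into the emotional posture, changing its amplitude."""
    return w * emotional + (1.0 - w) * functional

# A mostly functional point that still carries some 'happy' openness:
frame = blend(HAPPY_POSTURE, POINT_GESTURE, w=0.3)
```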

Artificial agents also display gestures alongside language, such as co-speech gestures (Wolfert, Robinson, & Belpaeme, 2022). Salem, Eyssel, Rohlfing, Kopp, and Joublin (2013) set up a moving-house scenario in which a robot told participants where to place objects and assisted them. One robot only spoke, without moving, gesturing, or shifting its gaze; another spoke while producing 21 accompanying gestures (such as pointing or turning to look), which either matched the spoken content entirely or were occasionally incongruent with it. Participants judged the robot whose speech and gestures were occasionally incongruent as more anthropomorphic, more likable, and more willing to share, and expressed the greatest willingness to cooperate with it in the future, perhaps because a robot that produces accompanying gestures but occasionally errs seems more like a real human.

Body expressions are likewise shaped by many factors. Xu, Broekens, Hindriks, and Neerincx (2013) found that movement speed affects perceived valence and arousal: fast movements are judged more positive and more arousing, slow movements more negative and less arousing. In Tsiourti, Weiss, Wac, and Vincze (2019), a robot displayed one of three postures after watching a film clip: a happiness posture (head and torso upright, arms extended upward and sideways), a sadness posture (head and torso bent forward, gaze to the ground, hands crossed over the abdomen), or a surprise posture (head and torso bent back, arms stretched upward). Postures were shown under two kinds of incongruence, scenario incongruence (the robot's reaction conflicted with the film context) and cross-channel incongruence (the robot's verbal message conflicted with its posture); incongruence lowered participants' perception of the robot's trustworthiness, likability, and intelligence and impaired recognition of the robot's true emotion. Beck, Cañamero, and Bard (2010) found an effect of head position: a raised head strengthened the perception of pride, happiness, and excitement postures, while a lowered head strengthened the perception of anger and sadness postures.
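
As a rough way to summarize the direction of these findings, one might map behavior parameters to predicted perceived affect. The toy linear model below is purely illustrative: the functional form and coefficients are invented, not fitted to any of the cited data:

```python
# Toy model of the reported effect directions: faster motion -> more positive
# and more aroused (Xu et al., 2013); head raised -> pride/happiness, head
# lowered -> anger/sadness (Beck et al., 2010).
def predicted_affect(speed: float, head_up: float) -> tuple[float, float]:
    """speed and head_up in 0..1; returns (valence, arousal), roughly -1..1."""
    valence = 0.6 * (2.0 * speed - 1.0) + 0.4 * (2.0 * head_up - 1.0)
    arousal = 0.9 * (2.0 * speed - 1.0)
    return valence, arousal

print(predicted_affect(speed=0.9, head_up=0.8))  # fast, head raised: positive, high arousal
print(predicted_affect(speed=0.2, head_up=0.1))  # slow, head lowered: negative, low arousal
```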

3. Human Recognition of Artificial Agents' Emotions

That artificial agents can generate emotion is the foundation of affective human-agent communication, but successful human-agent interaction also depends on whether humans can accurately recognize the agents' emotions.

3.1. Emotion Recognition and Evaluation of Artificial Agents

3.1.1. Human Recognition of Artificial Agents' Emotions

A large body of research shows that the regularities of human facial-expression recognition transfer to digital agents, which can communicate with different human groups through lifelike emotional behavior (Jack & Schyns, 2017). Spencer-Smith et al. (2001) and Fabri, Moore, and Hobbs (2002) both found no significant difference in recognition of basic expressions (except disgust) between real and virtual/synthetic faces. Miwa, Itoh, Ito, Takanobu, and Takanishi (2004) built the emotions of the emotional robot WE-4R from the head (facial expression) and arms (hand and arm movements); people recognized the robot's basic emotions efficiently, with a slightly lower rate for fear. Dyck et al. (2008) found that people under 40 showed no significant difference in recognizing the basic expressions of virtual and real faces. Joyal, Jacob, Cigna, Guay, and Renaud (2014) compared neutral and six basic expressions on real and virtual faces while recording facial EMG (zygomaticus major and corrugator supercilii) and gaze fixations on the eyes and mouth; human and virtual faces showed similar recognition rates, facial activity, and fixation times for all expressions. Lazzeri et al. (2015) even found that anger, disgust, and fear were recognized better on a robot than on human faces.

Other studies, however, report lower recognition rates for artificial agents' expressions. Rizzo, Neumann, Enciso, Fidaleo, and Noh (2001) had people evaluate a 3D avatar's six basic expressions plus confusion, frustration, and flirtation; recognition of the avatar's fear, anger, sadness, disgust, and confusion was poor. Bazo, Vaidyanathan, Lentz, and Melhuish (2010) found that happiness, surprise, and sadness were recognized well on the BERT2 robot, but disgust and fear were easily confused. Fear and surprise in artificial agents are also easily confused (Lazzeri et al., 2015). In the humanoid-robot expression algorithm of Shayganfar, Rich, and Sidner (2012), recognition rates for both static and dynamic fear expressions were low. These results are likely constrained by the types of artificial agents studied and by the expression-generation techniques used.

3.1.2. Human Liking Evaluations of Artificial Agents

Are artificial agents that can express emotion better liked by humans? Many studies support this. Krämer, Simons, and Kopp (2007) had the conversational virtual agent Max either display speech-accompanying gestures (self-touching, eyebrow movement) while talking with people, or display no gestural changes at all; people evaluated the Max that displayed varied emotional gestures more positively. Pais, Argall, and Billard (2013) had people train a robot under four human-robot feedback conditions: verbal, graphical interface, facial expression, and no feedback. Facial feedback from the robot (happiness, satisfaction, annoyance) raised people's subjective evaluation of the training and their post-training satisfaction with the robot. Mattheij, Postma-Nilsenová, and Postma (2015) found that spontaneous facial mimicry (happiness, surprise, disgust) occurs in human-agent interaction. In Philip, Martin, and Clavel (2018), people spontaneously mimicked a virtual agent's happy, angry, and sad expressions, though at reduced intensity. Costa, Brunete, Bae, and Mavridis (2018) found that an artificial storyteller's facial expressions can evoke synchronized expression changes in listeners, producing empathy. Hamacher, Bianchi-Berthouze, Pipe, and Eder (2016) gave the BERT2 robot three behavior profiles: flawless work with no emotional expression; error-prone work with apology; and error-prone work with apology plus an expression of sadness. People were more willing to accept the robot that worked imperfectly but expressed emotion, suggesting that appropriate affective behavior makes people more tolerant of robots' mistakes.

Hortensius et al. (2018) note that, from a design standpoint, emotional responses are the most common form of feedback in human-robot interaction; many people treat these responses as sincere emotional feedback, which effectively strengthens the interaction.

3.2. Factors Influencing Recognition of Artificial Agents' Emotions

Whether people can correctly recognize an artificial agent's emotions is influenced by multiple factors.

3.2.1. Effects of the Artificial Agent's Appearance Design

The agent's appearance design is an important factor, for example its facial appearance. Broadbent et al. (2013) fitted the head screen of a Peoplebot healthcare robot with one of three displays: a 3D virtual human face or a silvery metallic human face, both capable of expressing emotion, with a no-face display screen as the control. The robot assisted participants with a blood-pressure test. People preferred the robot with the humanlike, expressive 3D virtual face, rating it as more alive, more humanlike, more sociable, and more amiable.

Other work has examined overall appearance. Numata et al. (2020) studied people's emotion recognition of a virtual chick agent: participants looked at the chick either smiling or with a calm face, and the chick responded with a positive, negative, or neutral expression. When the chick showed a positive expression, participants reported positive emotional experience, indicating that non-humanoid agents can also engage in affective interaction with humans. However, Beer, Smarr, Fisk, and Rogers (2015) found that recognition was highest for human expressions, next for a synthetic human, and lowest for the non-humanoid robot iCat.

Physical embodiment also matters. Lazzeri et al. (2015) and Li (2015) both found that expression recognition for a physical robot was better than for the same robot's on-screen 2D, 3D, or virtual renderings; the physical robot was judged more positive and more persuasive and delivered a better user experience. Hofree, Ruvolo, Bartlett, and Winkielman (2014) had participants either share a room with a humanoid robot or watch it on a screen: when the robot was physically present, people showed stronger synchronized facial mimicry and also judged the robot as more humanlike.

3.2.2. Effects of the Artificial Agent's Expression Design

Details of expression design also matter. Nadel et al. (2006) presented static or dynamic robot expressions to adults and 3-year-old children: human expressions were easier to recognize; dynamics increased empathic responses to both robot and human expressions; and adults recognized static expressions at a higher rate. Ruijten, Midden, and Ham (2013) found that when gaze direction was congruent with the expressed emotion (e.g., anger with direct gaze), people recognized the virtual agent's emotion faster and rated it as more trustworthy. Milcent et al. (2019) showed participants expression videos of four agents: a real human; a virtual human with pupil changes and expressive wrinkles during emotion; a virtual human without pupil changes; and a virtual human without expressive wrinkles. Anger and surprise were recognized better on the real human than on the virtual humans, but fear and sadness were recognized better on the virtual human without expressive wrinkles than on the real human.

3.2.3. Effects of Human Users' Characteristics

Besides the design of the artificial agent itself, characteristics of the human user also exert an influence.

Age is an important factor. Dyck et al. (2008) found that people over 40 recognized disgust less accurately on virtual faces than on human faces. Beer, Fisk, and Rogers (2009) found lower recognition of a virtual agent's anger, fear, and happiness among older adults. In another of their studies (Beer et al., 2010), older adults mislabeled the anger, fear, sadness, and neutral expressions of human and synthetic-human faces, and the anger, fear, happiness, and neutral expressions of a virtual agent. Beer et al. (2015) found that older adults recognized artificial agents' dynamic expressions less accurately than static faces. Numata, Asa, Kitagaki, Hashimoto, and Karasawa (2019) found that older participants recognized the virtual chick Piyota's six basic expressions and sympathy less accurately and, except for sympathy, made highly scattered mismatches across the other expressions. Hosseinpanah, Krämer, and Straßmann (2018) had the agent Billie express happiness (smiling, nodding, smiling plus nodding) and sadness (sad face, lowered head, drooping arms plus sad face) through expressions and movements: after interacting with Billie, older adults (compared with younger adults) judged Billie as more empathic and more trustworthy.

The individual's task goal and mental state also matter. Perugia et al. (2021) found that when participants were given an explicit emotion-recognition goal, the physical robot evoked the least spontaneous mimicry (less than the virtual agents), contrary to earlier findings; in this case the agent evoking the least mimicry was in turn rated the most natural, most anthropomorphic, and most likable. Raffard et al. (2016) found that negative symptoms in patients with schizophrenia correlated negatively with recognition of negative expressions on both robots and humans.

Cultural differences have also been studied. Becker-Asano and Ishiguro (2011) showed the expressions of the robot Geminoid F to speakers of English, German, and Japanese: fear and surprise were easily confused overall, and Japanese speakers tended to confuse anger and sadness.

Future research could further consider the effects of perceptual ability, education level, technology experience, and similar factors, all of which bear on the recognition and acceptance of new AI technologies.

4. Future Research Directions

Future research can extend this work in several directions:

(1) Beyond basic emotions, human emotion includes more complex social emotions (e.g., confusion), feigned emotions (e.g., fake smiles), morally charged emotions (e.g., schadenfreude), and subtle emotions (e.g., micro-expressions). Emotion design for artificial agents has so far concentrated on basic emotions and rarely touched the other types; future work should explore them further.

(2) Human emotional expression is an organic integration of verbal and nonverbal cues spanning multiple channels and senses, such as voice, face, and posture. Existing research has addressed only single cues or combinations of a few; future research should explore more systematically how to fuse multiple emotional cues in artificial agents' emotional expression. Moreover, non-expressive cues (e.g., color, brightness, scene) can also convey emotion, and their interaction with emotional cues deserves attention.

(3) Artificial agents come in many varieties, each with its own emotional expression algorithms and its own target users' needs. Future research should investigate specific artificial agents in a more targeted way, refining their emotional expression for particular user groups and particular contexts.

References

[1] 彭聃龄 (2018). 普通心理学 [General Psychology]. Beijing Normal University Press.
[2] Bazo, D., Vaidyanathan, R., Lentz, A., & Melhuish, C. (2010). Design and Testing of a Hybrid Expressive Face for a Humanoid Robot. In IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 5317-5322). IEEE.
https://doi.org/10.1109/IROS.2010.5651469
[3] Beck, A., Cañamero, L., & Bard, K. A. (2010). Towards an Affect Space for Robots to Display Emotional Body Language. In 19th IEEE International Symposium on Robot and Human Interactive Communication (pp. 12-15). IEEE.
https://doi.org/10.1109/ROMAN.2010.5598649
[4] Beck, A., Stevens, B., Bard, K. A., & Cañamero, L. (2012). Emotional Body Language Displayed by Artificial Agents. ACM Transactions on Interactive Intelligent Systems, 2, 1-29.
https://doi.org/10.1145/2133366.2133368
[5] Becker-Asano, C., & Ishiguro, H. (2011). Evaluating Facial Displays of Emotion for the Android Robot Geminoid F. In 2011 IEEE Workshop on Affective Computational Intelligence (WACI) (pp. 1-8). IEEE.
https://doi.org/10.1109/WACI.2011.5953147
[6] Beer, J. M., Fisk, A. D., & Rogers, W. A. (2009). Emotion Recognition of Virtual Agents Facial Expressions: The Effects of Age and Emotion Intensity. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 53, 131-135.
https://doi.org/10.1177/154193120905300205
[7] Beer, J. M., Fisk, A. D., & Rogers, W. A. (2010). Recognizing Emotion in Virtual Agent, Synthetic Human, and Human Facial Expressions. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 54, 2388-2392.
https://doi.org/10.1177/154193121005402806
[8] Beer, J. M., Smarr, C. A., Fisk, A. D., & Rogers, W. A. (2015). Younger and Older Users’ Recognition of Virtual Agent Facial Expressions. International Journal of Human-Computer Studies, 75, 1-20.
https://doi.org/10.1016/j.ijhcs.2014.11.005
[9] Broadbent, E., Kumar, V., Li, X., Sollers III, J., Stafford, R. Q., & Wegner, D. M. (2013). Robots with Display Screens: A Robot with a More Humanlike Face Display Is Perceived to Have More Mind and a Better Personality. PLOS ONE, 8, e72589.
https://doi.org/10.1371/journal.pone.0072589
[10] Ceha, J., Chhibber, N., Goh, J., McDonald, C., Oudeyer, P., Kulić, D., & Law, E. (2019). Expression of Curiosity in Social Robots: Design, Perception, and Effects on Behaviour. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-12). Association for Computing Machinery.
https://doi.org/10.1145/3290605.3300636
[11] Costa, S., Brunete, A., Bae, B. C., & Mavridis, N. (2018). Emotional Storytelling Using Virtual and Robotic Agents. International Journal of Humanoid Robotics, 15, Article ID: 1850006.
https://doi.org/10.1142/S0219843618500068
[12] Dyck, M., Winbeck, M., Leiberg, S., Chen, Y., Gur, R. C., & Mathiak, K. (2008). Recognition Profile of Emotions in Natural and Virtual Faces. PLOS ONE, 3, e3628.
https://doi.org/10.1371/journal.pone.0003628
[13] Fabri, M., Moore, D. J., & Hobbs, D. (2002). Expressive Agents: Non-Verbal Communication in Collaborative Virtual Environments.
https://www.researchgate.net/publication/238689318
[14] Hamacher, A., Bianchi-Berthouze, N., Pipe, A. G., & Eder, K. (2016). Believing in BERT: Using Expressive Communication to Enhance Trust and Counteract Operational Error in Physical Human-Robot Interaction. In IEEE International Symposium on Robot and Human Interactive Communication (pp. 493-500). IEEE.
https://doi.org/10.1109/ROMAN.2016.7745163
[15] Hofree, G., Ruvolo, P., Bartlett, M. S., & Winkielman, P. (2014). Bridging the Mechanical and the Human Mind: Spontaneous Mimicry of a Physically Present Android. PLOS ONE, 9, e99934.
https://doi.org/10.1371/journal.pone.0099934
[16] Hortensius, R., Hekele, F., & Cross, E. S. (2018). The Perception of Emotion in Artificial Agents. IEEE Transactions on Cognitive and Developmental Systems, 10, 852-864.
https://doi.org/10.1109/TCDS.2018.2826921
[17] Hosseinpanah, A., Krämer, N. C., & Straßmann, C. (2018). Empathy for Everyone? The Effect of Age When Evaluating a Virtual Agent. In HAI '18: Proceedings of the 6th International Conference on Human-Agent Interaction (pp. 184-190). Association for Computing Machinery.
https://doi.org/10.1145/3284432.3284442
[18] Ishi, C. T., Minato, T., & Ishiguro, H. (2019). Analysis and Generation of Laughter Motions, and Evaluation in an Android Robot. APSIPA Transactions on Signal and Information Processing, 8, e6.
https://doi.org/10.1017/ATSIP.2018.32
[19] Jack, R. E., & Schyns, P. G. (2017). Toward a Social Psychophysics of Face Communication. Annual Review of Psychology, 68, 269-297.
https://doi.org/10.1146/annurev-psych-010416-044242
[20] Joyal, C. C., Jacob, L., Cigna, M. H., Guay, J. P., & Renaud, P. (2014). Virtual Faces Expressing Emotions: An Initial Concomitant and Construct Validity Study. Frontiers in Human Neuroscience, 8, Article 787.
https://doi.org/10.3389/fnhum.2014.00787
[21] Krämer, N. C., Simons, N., & Kopp, S. (2007). The Effects of an Embodied Agent's Nonverbal Behavior on User's Evaluation and Behavioural Mimicry. In C. Pelachaud, J. C. Martin, E. André, G. Chollet, K. Karpouzis, & D. Pelé (Eds.), Intelligent Virtual Agents (pp. 238-251). Springer.
https://doi.org/10.1007/978-3-540-74997-4_22
[22] Krämer, N., Kopp, S., Becker-Asano, C., & Sommer, N. (2013). Smile and the World Will Smile with You—The Effects of a Virtual Agent’s Smile on Users’ Evaluation and Behavior. International Journal of Human-Computer Studies, 71, 335-349.
https://doi.org/10.1016/j.ijhcs.2012.09.006
[23] Krumhuber, E., Manstead, A., Cosker, D., Marshall, D., & Rosin, P. L. (2008). Effects of Dynamic Attributes of Smiles in Human and Synthetic Faces: A Simulated Job Interview Setting. Journal of Nonverbal Behavior, 33, 1-15.
https://doi.org/10.1007/s10919-008-0056-8
[24] Lazzeri, N., Mazzei, D., Greco, A., Rotesi, A., Lanatà, A., & De Rossi, D. E. (2015). Can a Humanoid Face Be Expressive? A Psychophysiological Investigation. Frontiers in Bioengineering and Biotechnology, 3, Article 64.
https://doi.org/10.3389/fbioe.2015.00064
[25] Li, J. (2015). The Benefit of Being Physically Present: A Survey of Experimental Works Comparing Copresent Robots, Telepresent Robots and Virtual Agents. International Journal of Human-Computer Studies, 77, 23-37.
https://doi.org/10.1016/j.ijhcs.2015.01.001
[26] Salem, M., Eyssel, F., Rohlfing, K., Kopp, S., & Joublin, F. (2013). To Err Is Human(-Like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability. International Journal of Social Robotics, 5, 313-323.
https://doi.org/10.1007/s12369-013-0196-9
[27] Mattheij, R., Postma-Nilsenová, M., & Postma, E. (2015). Mirror Mirror on the Wall. Journal of Ambient Intelligence and Smart Environments, 7, 121-132.
https://doi.org/10.3233/AIS-150311
[28] Milcent, A., Geslin, E., Kadri, A., & Richir, S. (2019). Expressive Virtual Human: Impact of Expressive Wrinkles and Pupillary Size on Emotion Recognition. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (pp. 215-217). Association for Computing Machinery.
https://doi.org/10.1145/3308532.3329446
[29] Miwa, H., Itoh, K., Ito, D., Takanobu, H., & Takanishi, A. (2004). Design and Control of 9-Dofs Emotion Expression Humanoid Arm. In IEEE International Conference on Robotics and Automation (pp. 128-133). IEEE.
https://doi.org/10.1109/ROBOT.2004.1307140
[30] Mollahosseini, A., Abdollahi, H., Sweeny, T. D., Cole, R., & Mahoor, M. H. (2018). Role of Embodiment and Presence in Human Perception of Robots’ Facial Cues. International Journal of Human-Computer Studies, 116, 25-39.
https://doi.org/10.1016/j.ijhcs.2018.04.005
[31] Nadel, J., Simon, M., Canet, P., Soussignan, R., Blancard, P., Cañamero, L., & Gaussier, P. (2006). Human Responses to an Expressive Robot. Proceedings of the Sixth International Workshop on Epigenetic Robotics, 128, 79-86.
[32] Novikova, J., & Watts, L. (2014). A Design Model of Emotional Body Expressions in Non-Humanoid Robots. In 2nd International Conference on Human-Agent Interaction (pp. 353-360). Association for Computing Machinery.
https://doi.org/10.1145/2658861.2658892
[33] Numata, T., Asa, Y., Kitagaki, T., Hashimoto, T., & Karasawa, K. (2019). Young and Elderly Users’ Emotion Recognition of Dynamically Formed Expressions Made by a Non-Human Virtual Agent. In Proceedings of the 7th International Conference on Human-Agent Interaction (pp. 253-255). Association for Computing Machinery.
https://doi.org/10.1145/3349537.3352783
[34] Numata, T., Sato, H., Asa, Y., Koike, T., Miyata, K., Nakagawa, E., & Sadato, N. (2020). Achieving Affective Human-Virtual Agent Communication by Enabling Virtual Agents to Imitate Positive Expressions. Scientific Reports, 10, Article No. 5977.
https://doi.org/10.1038/s41598-020-62870-7
[35] Ochs, M., Niewiadomski, R., & Pelachaud, C. (2010). How a Virtual Agent Should Smile? Morphological and Dynamic Characteristics of Virtual Agent's Smiles. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent Virtual Agents (pp. 427-440). Springer.
https://doi.org/10.1007/978-3-642-15892-6_47
[36] Pais, A. L., Argall, B. D., & Billard, A. G. (2013). Assessing Interaction Dynamics in the Context of Robot Programming by Demonstration. International Journal of Social Robotics, 5, 477-490.
https://doi.org/10.1007/s12369-013-0204-0
[37] Pelachaud, C. (2009). Modelling Multimodal Expression of Emotion in a Virtual Agent. Philosophical Transactions of the Royal Society B: Biological Sciences, 364, 3539-3548.
https://doi.org/10.1098/rstb.2009.0186
[38] Perugia, G., Paetzel-Prüssmann, M., Hupont, I., Varni, G., Chetouani, M., Peters, C. E., & Castellano, G. (2021). Does the Goal Matter? Emotion Recognition Tasks Can Change the Social Value of Facial Mimicry towards Artificial Agents. Frontiers in Robotics and AI, 8, Article 699090.
https://doi.org/10.3389/frobt.2021.699090
[39] Philip, L., Martin, J. C., & Clavel, C. (2018). Rapid Facial Reactions in Response to Facial Expressions of Emotion Displayed by Real versus Virtual Faces. i-Perception, 9, 1-18.
https://doi.org/10.1177/2041669518786527
[40] Raffard, S., Bortolon, C., Khoramshahi, M., Salesse, R. N., Burca, M., Marin, L., & Capdevielle, D. (2016). Humanoid Robots versus Humans: How Is Emotional Valence of Facial Expressions Recognized by Individuals with Schizophrenia? An Exploratory Study. Schizophrenia Research, 176, 506-513.
https://doi.org/10.1016/j.schres.2016.06.001
[41] Randhavane, T., Bera, A., Kapsaskis, K., Sheth, R., Gray, K., & Manocha, D. (2019). EVA: Generating Emotional Behavior of Virtual Agents Using Expressive Features of Gait and Gaze. In ACM Symposium on Applied Perception 2019 (pp. 1-10). Association for Computing Machinery.
https://doi.org/10.1145/3343036.3343129
[42] Rehm, M., & André, E. (2005). Catch Me If You Can: Exploring Lying Agents in Social Settings. In Proceedings of International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 937-944). Association for Computing Machinery.
https://doi.org/10.1145/1082473.1082615
[43] Rizzo, A. A., Neumann, U., Enciso, R., Fidaleo, D., & Noh, J. Y. (2001). Performance-Driven Facial Animation: Basic Research on Human Judgments of Emotional State in Facial Avatars. Cyberpsychology & Behavior, 4, 471-487.
https://doi.org/10.1089/109493101750527033
[44] Ruijten, P. A. M., Midden, C. J. H., & Ham, J. (2013). I Didn’t Know That Virtual Agent Was Angry at Me: Investigating Effects of Gaze Direction on Emotion Recognition and Evaluation. In S., Berkovsky, & J. Freyne (Eds.), Persuasive Technology (pp. 192-197). Springer.
https://doi.org/10.1007/978-3-642-37157-8_23
[45] Shayganfar, M., Rich, C., & Sidner, C. L. (2012). A Design Methodology for Expressing Emotion on Robot Faces. In IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 4577-4583). IEEE.
https://doi.org/10.1109/IROS.2012.6385901
[46] Spencer-Smith, J., Wild, H., Innes-Ker, Å. H., Townsend, J., Duffy, C., Edwards, C., Paik, J. W. et al. (2001). Making Faces: Creating Three-Dimensional Parameterized Models of Facial Expression. Behavior Research Methods, Instruments, & Computers, 33, 115-123.
https://doi.org/10.3758/BF03195356
[47] Tsiourti, C., Weiss, A., Wac, K., & Vincze, M. (2019). Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (in) Congruence on Emotion Recognition and Attitudes towards Robots. International Journal of Social Robotics, 11, 555-573.
https://doi.org/10.1007/s12369-019-00524-z
[48] Van de Perre, G., Cao, H. L., De Beir, A., Esteban, P. G., Lefeber, D., & Vanderborght, B. (2018). Generic Method for Generating Blended Gestures and Affective Functional Behaviors for Social Robots. Autonomous Robots, 42, 569-580.
https://doi.org/10.1007/s10514-017-9650-0
[49] Wolfert, P., Robinson, N., & Belpaeme, T. (2022). A Review of Evaluation Practices of Gesture Generation in Embodied Conversational Agents. IEEE Transactions on Human-Machine Systems, 52, 379-389.
[50] Xu, J., Broekens, J., Hindriks, K. V., & Neerincx, M. (2013). The Relative Importance and Interrelations between Behavior Parameters for Robots' Mood Expression. In Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 558-563). IEEE.
https://doi.org/10.1109/ACII.2013.98
[51] Youssef, A. B., Chollet, M., Jones, H., Sabouret, N., Pelachaud, C., & Ochs, M. (2015). Towards a Socially Adaptive Virtual Agent. In W. P. Brinkman, J. Broekens, & D. Heylen (Eds.), Intelligent Virtual Agents (pp. 3-16). Springer.
https://doi.org/10.1007/978-3-319-21996-7_1