口译多模态加工中的手势认知效应研究
Investigating the Cognitive Effects of the Interpreter’s Gestures in Multi-Modal Processing
DOI: 10.12677/ML.2021.94123
作者: 胡敏霞*, 伍 鑫:四川大学外国语学院,四川 成都
关键词: 译员手势,认知加工,多模态分析,ELAN软件;Interpreter’s Gestures, Cognitive Processing, Multimodal Analysis, ELAN Software
摘要: 本文使用ELAN多模态话语分析软件对两段中国导演海外影片宣传的口译活动进行了分析。结果发现手势是口译加工中重要的显身特征、信息模态和认知资源,具体来说:1) 认知负荷更大时,手势数量更多,手势位置更高,手势更加快速而频繁,但主要手势类型都是节奏性手势;2) 口译认知加工中的手势、语言和副语言产品具有跨模态统一性;3) 口译质量更高的译员更倾向于模仿讲者手势,但手势类型可能发生变化。同时,手势数量也可能受到现场语境因素以及译员认知资源的影响。本研究构建的初步框架和分析工具有助于弥补口译手势多模态研究的不足,相关结论亦可促进基于循证的口译教学。
Abstract: This research uses a multi-modal discourse analysis software, ELAN, to analyse two interpreter-mediated events featuring Chinese film directors promoting their films to an international audience. It is found that gestures represent an important embodied information modality and cognitive resource in interpreting processing. Specifically: 1) With higher cognitive load, more gestures are made, in higher positions and at a faster pace and frequency, although beats remain the dominant gesture type for all speakers, including interpreters; 2) Gestures align with the interpreter’s linguistic and paralinguistic products during the multimodal cognitive processing of interpreting; 3) Interpreters achieving higher interpreting quality scores imitate more of the speaker’s gestures, although the gesture type may change in imitation. Meanwhile, contextual factors and the availability of the interpreter’s cognitive resources may also affect the use of gestures. The preliminary framework and analysis tools built in this research could fill some of the gaps in understanding the multi-modal nature of interpreting gestures. The initial findings could also promote evidence-based interpreter education.
文章引用:胡敏霞, 伍鑫. 口译多模态加工中的手势认知效应研究[J]. 现代语言学, 2021, 9(4): 912-923. https://doi.org/10.12677/ML.2021.94123

1. 引言

手势(gesture)是指“说话中出现的任何动作”( [1], p.4),但本文主要讨论讲者讲话时的手部动作(hand gesture)。手势可分为三阶段:预备、比划和收回(prepare, stroke and retract) ( [2], p.208);或更细的五阶段:静止、预备、比划、保持和收回(rest, prepare, stroke, hold and retract) ( [3], p.54),但手势的基本单位和核心阶段是“比划” [4] [5]。McNeill将手势分为四大类:1) 图示性手势(iconics),是对具体事物或动作的手势描述;2) 隐喻性手势(metaphorics),是对抽象概念或想法的手势描述;3) 指向性手势(deictics),是对空间点或是时间点的手势指向;4) 节奏性手势(beats),是基于话语重音的手势节奏( [5], p.76);另外还有5) 适应性手势(adaptors),是紧张时摸鼻子或抓头发等手势动作 [3]。

手势是思维叙事的形象表征 [6]。手势可打开“不易被语言表达的思维窗户”( [7], p.327);手势是被讲者主动赋予了意义的符号( [5], p.105);手势既是有意识也是自动化的行为( [8], p.90)。手势有助于管理工作记忆负荷 [9] [10],工作记忆能力较低的人会更多使用伴语手势(co-speech gestures) [11] [12] [13],工作记忆能力更强的人更易理解手势意义 [14] [15]。

手势和语言具有统一性 [4] [16]。手势与讲话内容一致的情况下,可减少语言的加工负荷 [17],有助于话语生产和组织 [18] [19] [20] [21],有助于信息理解(见综述 [22] ),有助于学习和记忆 [23] [24],也有助于解决问题 [25] [26],观察节奏性手势有助于改善学生的外语口音 [27]。因此,具有跨模态统一性的伴语手势对讲者和观众都有益 [22] [28]。

手势是口译认知加工的重要认知资源。 [29] 发现同声传译(“同传”)译员在不确定性较高的答问环节中,手势数量会增加,手势位置也会升高。 [30] 发现看不见讲者手势会让同传译员陷入焦虑,导致译员需要努力集中注意力,手势可辅助译员理解和甄别重点信息。 [31] 发现译员在认知负荷较高的同传过程中会频繁打手势。 [32] 发现讲者和译员手势有大量重叠,讲者和译员都会在列举时打出节奏性手势。 [33] 指出手势能促进口译质量,降低同传认知负荷,且译员手势会随着原语输入内容的变化而变化。 [34] 发现尽管列举和数字对于同传译员来说都是高负荷信息,但译员手势更多的时候是列举。在交替传译(“交传”)的工作模式中, [35] 发现医疗译员的手势可以补充问诊中医患双方缺失的信息。

综上,目前口译手势的实证研究仍相对有限,且主要针对同传和非中英语对。鉴于此,本研究将使用ELAN多模态软件对包含清晰手势图像的中/英交传语段进行试点分析,目标语段中包含隐喻类信息,因为这类信息的口译认知负荷较大 [36],而且使用手势的概率也较大 [37]。本研究主要回答以下问题:1) 讲者和译员的主要手势特征是什么?2) 手势与语言及副语言产品之间是否存在多模态统一性?3) 手势在口译过程中发挥怎样的认知功能?

2. 口译员手势的多模态分析

2.1. 材料概述

[38] 指出,口译多模态语料库的语料来源主要包括三个渠道:1) 电视或网络媒体;2) 国际组织官网;3) 国际会议。前两种语料属于公开资源,用作学术研究不存在知识产权的问题。本研究的所有语料均来自第一个渠道,材料基本情况如下:

视频1:选自张艺谋导演(“讲者1”)2018年9月10日在加拿大多伦多电影节参加《影》(Shadow)北美首映式后的问答环节。参与者共三人:张艺谋、主持人卡梅隆·贝利(Cameron Bailey)及译员1。截取时长为3分17秒(见图1)。

视频2:选自刁亦男导演(“讲者2”)2019年9月29日在美国第57届纽约电影节参加电影《南方车站的聚会》(The Wild Goose Lake)美国首映式后的对话访谈。参与者共三人:刁亦男、主持人伊芙·加贝罗(Eve Gabereau)及译员2。截取时长为4分34秒(见图2)。

2.2. 译员背景

译员1是张鑫,女,口译方向是汉译英,译入外语(A-B),于2010年考入上海外国语大学高级翻译学院会议口译专业,2013年起担任加拿大约克大学Glendon学院会议口译系的全职讲师,为国际会议口译员协会AIIC会员,拥有丰富的交替传译实践经验(译员背景信息来源:https://www.linkedin.com/in/emmaxz/)。

Figure 1. Photo of Speaker 1 (Yimou Zhang, first from left) and Interpreter 1 (first from right)

图1. 讲者1 (张艺谋)与译员1的口译现场照片

Figure 2. Photo of Speaker 2 (Yinan Diao, second from left) and Interpreter 2 (first from right)

图2. 讲者2 (刁亦男)与译员2的口译现场照片

译员2是Tzu-Wen Cheng (郑子文),男,口译方向是汉译英,译入外语(A-B),是纽约城市大学曼哈顿社区学院副教授,曾为贾樟柯、蔡明亮、侯孝贤、刁亦男等导演赴美演艺交流活动提供口译服务(译员背景信息来源:https://www.bmcc.cuny.edu/faculty/vincent-cheng/)。

2.3. 数据分析

本研究借鉴 [34] 研究中使用的马普心理语言学研究所(Max Planck Institute for Psycholinguistics)开发的免费开源软件ELAN (6.0版本)对视频语料进行可视化多模态分析,预处理环节包括分层、转写、标注和对齐(见图3、图4)。

1) 分层:在ELAN中建立四个分析层级:导演字幕层、译员字幕层、导演手势层和译员手势层。先转写导演字幕层和译员字幕层,再分别在导演手势层和译员手势层标注相应的手势编码。

2) 转写:首先,利用网易见外转录工具包(https://jianwai.youdao.com)生成字幕并嵌入ELAN字幕层,人工校对后生成转录文本。然后,利用格式工厂软件(http://www.pcgeshi.com/)将目标视频格式转为WAV音频格式(WAV文件储存的是声音波形的二进制数据,便于副语言特征分析)。随后,将目标视频和音频文件一并导入ELAN软件。最后,采用客观描写法,对副语言产品加以自然记录,对重复和语法不规范甚至错误等现象不作更正( [39], p.96)。转写的副语言产品包括:有声停顿(filled pauses,如“呃”、“啊”,用***表示)、无声停顿(silent pauses,见 [40],时长超过0.250秒的无声停顿,用……表示)、重新表述(reformulation,即改口,用&&表示)和自我重复(repetition,用^^表示)。

3) 标注:首先,用G + 数字表示手势的出场顺序;然后对五种手势类型进行编码:MG为隐喻性手势,ICG为图示性手势,BTS为节奏性手势,DG为指向性手势,AG为适应性手势;在手势类型编码之后的是与发言人手势共生的语言或副语言产品。例如,“G40 + ICG + 拉开这个门”表示:这是第40个手势,属于图示性手势,与“拉开这个门”的语言信息重合(此类编码的自动解析示例见本节末尾的代码片段)。三名独立编码员分别进行编码,如有争议,进行全体讨论,确保结果完全一致。

4) 对齐:从上到下分层将原语和译语在时间轴上按意群对齐。
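为便于复核,下面给出一个极简的Python示意代码(假设性示例,并非本研究实际使用的脚本),演示如何按上述编码规则解析手势标注(如“G40 + ICG + 拉开这个门”)并统计转写文本中的副语言标记(***、……、&&、^^):

import re
from collections import Counter

# 手势标注格式:G + 出场序号 + 手势类型编码 + 与手势共生的语言/副语言产品
GESTURE_PATTERN = re.compile(r"G(\d+)\s*\+\s*(MG|ICG|BTS|DG|AG)\s*\+\s*(.+)")

# 副语言标记,与本节转写约定一致
DISFLUENCY_MARKS = {"有声停顿": "***", "无声停顿": "……", "重新表述": "&&", "自我重复": "^^"}

def parse_gesture(label):
    """解析单条手势标注,返回(序号, 手势类型, 共生话语)。"""
    m = GESTURE_PATTERN.match(label.strip())
    if m is None:
        raise ValueError("无法解析的标注: " + label)
    return int(m.group(1)), m.group(2), m.group(3).strip()

def count_disfluencies(transcript):
    """统计转写文本中各类副语言标记的出现次数。"""
    return Counter({name: transcript.count(mark) for name, mark in DISFLUENCY_MARKS.items()})

print(parse_gesture("G40 + ICG + 拉开这个门"))      # 输出 (40, 'ICG', '拉开这个门')
print(count_disfluencies("呃***,不是&&,演演^^那个子虞和境州的……"))

该片段仅演示编码规则的可操作性;实际研究中的标注数据可先从ELAN导出为文本(如制表符分隔格式),再读取后按同样方式统计。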

Figure 3. ELAN layers, transcriptions, notations and alignments of Video 1

图3. 视频1的ELAN分层、转写、标注和对齐

Figure 4. ELAN layers, transcriptions, notations and alignments of Video 2

图4. 视频2的ELAN分层、转写、标注和对齐

3. 结果

3.1. 语言和副语言产品分析

视频1

主要隐喻:“用雨水,用中国的阴阳的关系来打仗”;“像女人一样地扭,你才可以赢”。

讲者1:呃***,孙俪其实是个非常好的演员。呃***,在这个电影中,因为那个演邓超,呃***,不是&&,演演^^那个子虞和境州的,这是她的丈夫邓超,所以他们是夫妻两个演。其实我觉得在演电影的过程中……,她……孙俪&&可能会很分裂吧。呃***,嗖***,一……妻子要跟两个丈夫演戏。呃***,两个丈夫……,好像还是情敌。嘿,这个这个^^也挺复杂的啊。嗖***,呃***,但是她们夫妻两个都很好,演得非常好。其实,这个电影,你看起来像是一个男人的电影……,啧***:关于权谋……,啧***,关于争斗,关于生存。但我实际认为这个女性的角色非常重要。我们看电影的最后一个画面:孙俪要不要拉开这个门?拉开门要不要说出真相?很困难的一个选择。我们还看,在电影中……,我们用雨水……,用中国的阴阳的关系来打仗。但是,是孙俪点破了这个打仗的方法。她说:“你必须……像女人一样地扭……,你才可以赢”。我们看到这个电影中…,其实女性的角色非常重要。她是关键的。每次最关键的时候,都必须是她……出现……来……呃***推动故事的发展。

译员1:Ah*** Sun Li is definitely a ah*** outstanding actress. And ah*** she and Deng Chao are a real-life couple…, and they in this film po&&also portrayed a couple. Her role is very complicated, because she needs&& she is caught between…two husbands. And so, she's constantly…, ah***, facing… and… ah***, s...choices…as as ^^ well as ah *** tensions between the two husbands, because as you can see, there is a little bit of animosity between the two husbands. And the film, it seems that it’s all about power struggle…of men…, of royalties and about survival. However, in the film, ah***, female characters played an… extremely important role. And… at the end of the film, in the final scene…, will…, ah***, Sun Li open the door or not, and will she tell the truth… or not? So, these are all… tough choices that she has to make. Also in the film, you see a lot of rain, and the rain and water plays an important role in the film. And… the Yin and Yang philosophy… o-of ^^ China is also very well… ah*** portrayed in the film. And… her character played a key role in ah *** finding a way to counter the saber techniques of their opponent warrior. So, she designed these ah *** these ^^ feminine, ah***, feminine ^^ steps&& or postures… to (eh***) counter the really masculine, ah *** saber welding techniques. So, overall, she played ah*** an extremely important role in this storyline.

视频2

主要隐喻:“夜”,“像滤镜”、“像一层纱”、“像舞台”、“像……空间”。

讲者2:嗯***,摄影师是我第一部片的摄影,一直合作到现在,我们……嘶***,已经合作了&&,从02年开始合作到今天……,大概有10 && 17年。呃***,彼此都非常熟悉和了解。呃***,我的剧本写出来以后也先……先^^让他看。呃***,所以……但&&在一些沟通上和审美趣味上,我们……不用费什么精力。呃***,主要是……怎么去实施,怎么去呈现。可以说……,呃***,我是他的左眼,他是我的右眼。他也会提醒我……这场戏的表演。他也会关心表演。我也会关心摄影。呃***,相辅相成,都是一些正向的,互相帮助地往前发展。呃***,至于你说的夜……,那夜,我觉得这个逃犯,他……一定是在夜晚像一个动物一样被猎&&,被被^^围捕,被被^^追猎。呃***,他白天不会出来。所以我们大量的,85%的夜景……都是根据剧情……要求。没有办法,呃***,只能是用夜晚来拍摄。那夜也有它的魅力,它就像一个滤镜,或者像一……一一^^层纱一样……,把纵深的&&,白天看到的纵深的那些写实的东西过滤掉了。所以,你们看起来,它更像一个舞台……,或者像一个抽象空间……,像一个纯粹的……,呃***,反映时间在这里面流逝的一个空间。呃***,这就是夜给我们提供的最大的……,技术的,呃***,心理的……,审美的……一个保障。呵呵呵呵……。

译员2:I will try (grin)…So, ah***, in terms of the…the^^ cinematographer I've been working with him since ah*** my first film in 2002, so we have been working together for seventeen years up to this point. So, we have really good… working collaboration&&, ah***, agreement ah*** in a way that we understand each other. Ah***, we really work well together, ah***, on the level of aesthetics. We share the same aesthetics, so there's not much explanation needed ah*** when we collaborate. And when I finish my script, I also show my script to, ah***, Dong jinsong, which is a cinematographer first, so that we can have movie on the same page… literally speaking. So, I, I ^^ tend to think that for us, the collaboration is not on the ah***, conceptual or creative level. It’s very much about the the ^^ execution. How how^^ can we work together to… e-execute ^^ what we&&…I have put down, ah***, in my script. And I, I ^^ see&&. We joke about this. It's almost as if that I’m the left eyes, and he is the right eyes…, and together we get to see things… in a more complete manner that we actually complement each other very well. Sometimes, he's actually going to…&&will collaborate with me in terms of the acting side or the directing side of the, ah***, the shooting&&, the shoot. And also…, he will, in turn, ah *** also&&, I'm sorry, I then, will, in turn, ah *** give some com-&& feedback, feedback ^^ and comments about the, ah *** the cinematography and how I want it ah*** shot&&, to be set up. So, ah***, the second part of your question about the night scenes, I do think that since this particular story is about a character on the run, a fugitive…, so I really want to somehow position him in this… dark si- &&, not only the underbelly of society, but also literally… in ah*** at && night. And…, almost like that, he become the animal to be hunted. And ah***, I really want to somehow use this particular characteristics of night scenes, because if you think about night, because of lighting…, it almost as if you give a filter, ah***, to the camera, and also almost pre-&&, you create a screen f-&& to filter out things that you don't want to present in the films. And by using this particular night scene or the filter that I just mentioned, you tend to change and transform the three dimensional, ah***, images into a two dimensional one…, and then make it very &&, almost as if it is onstage and have become very, very abstract. And… with this particular abstraction, you get to really, ah***, examine and develop and explore this concept of time, ah***, through that two-dimensional abstraction. And that's the reason why I love to use, ah***, night scenes and then shot this film in particular 85% of them… at night.

口译实证研究发现非流利性(disfluency)和加工时间(processing time)等副语言特征是认知负荷的重要标志 [41] [42] [43]。认知负荷(Cognitive Load, CL)既指任务施加在执行者身上的加工负荷,又指执行者在任务中主动投入的精力 [44],在口译研究中又称外部的“输入负荷”和主体的“认知努力” [45] [46]。本研究将采用总体负荷和平均负荷来表征认知负荷的强度 [47]:讲者的发言字数越多,发言时间越长,表明输出给译员的总体负荷越大;而讲者的平均语速越快,表明输出给译员的平均负荷越大;同时,译员的发言字数越多、发言时间越长、平均语速越快,则表明译员自身的认知努力越大。
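下面用一个简短的Python示例(仅作示意,数值取自表1的文字描述,函数与变量命名为假设)说明平均语速与平均非流利性这两项指标的计算方式:

def speech_rate(units, seconds):
    """平均语速:总字数/词数 ÷ 发言时长 × 60,单位为字(词)/分钟。"""
    return units / seconds * 60

def disfluency_rate(count, seconds):
    """平均非流利性:非流利性总次数 ÷ 发言时长,单位为次/秒。"""
    return count / seconds

# 以译员2为例:491个单词、发言167.98秒、非流利性共61次
print(round(speech_rate(491, 167.98)))        # 约175词/分钟
print(round(disfluency_rate(61, 167.98), 2))  # 约0.36次/秒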

表1显示了讲者和译员的副语言特征。其中,译员2被输入的总体负荷更大(讲者2发言375字,时长104.87秒),但译员1被输入的平均负荷更大(讲者1平均语速为232字/分钟)。同时,译员2的认知努力更大(发言167.98秒,共491个单词,平均语速为175词/分钟),译员1的认知努力更小(发言109.62秒,共246个单词,平均语速为135词/分钟)。

在非流利性方面,表1显示:讲者2输出给译员2的非流利性更多(总共44次,平均0.42次/秒)。从译员自身的总体非流利性来看,译员2比译员1更多(61次 > 41次);在平均非流利性方面,译员1和译员2接近,译员1略多(0.37次/秒 > 0.36次/秒)。但是,两名译员的总体非流利性次数都高于各自的讲者,可能与两位译员比讲者更长的发言时间有关,说明译员的总体负荷高于讲者;在平均非流利性方面,译员1高于讲者1 (0.37次/秒 > 0.34次/秒),译员2低于讲者2 (0.36次/秒 < 0.42次/秒)。

Table 1. Paralinguistic features of speakers and interpreters

表1. 讲者和译员的副语言特征

3.2. 口译质量评估

我们从口译时长比率(10%)、非流利性比率(10%)以及错译、漏译和不妥翻译(80%, Errors, Omissions and Infelicities, EOI) [48] 三个维度进行了口译质量评估和打分(见表2)。
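作为示意,下面的Python片段展示了按上述权重(10%、10%、80%)合成百分制总分的方式;其中三个分项得分如何由时长比率、非流利性比率和EOI数量换算而来,原文未给出具体函数,此处仅以假设的分项得分代入:

def overall_score(duration_score, fluency_score, eoi_score):
    """按 0.1、0.1、0.8 的权重加权合成总分(各分项按百分制计,分项赋分方式为假设)。"""
    return 0.1 * duration_score + 0.1 * fluency_score + 0.8 * eoi_score

# 假设性示例:三个分项分别为95、90、89分,则总分为 0.1×95 + 0.1×90 + 0.8×89 = 89.7
print(overall_score(95, 90, 89))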

译员1的口译质量评分:90分。首先,译员1的口译效率高于译员2 (口译时长比率越小,口译效率越高:1.28 < 1.60);其次,虽然译员1的非流利性比率略高于译员2 (1.45 > 1.39),但非流利性总体数量低于译员2 (41次 < 61次);最后,译员1的忠实度较高,仅漏译了两处细节信息,有一处翻译偏移,即“打仗的方法”显化为“to counter the saber techniques of their opponent warrior (化解对方武士的刀法)”,语法错误较少(详见表2)。

译员2的口译质量评分:83分。首先,译员2的口译效率低于译员1,总体非流利性也较高(例如:“Sometimes, he’s actually going to…&& will collaborate with me in terms of the acting side or the directing side of the, ah***, the shooting&&, the shoot. And also…, he will, in turn, ah *** also&&, I'm sorry, I then, will, in turn, ah *** give some com-&& feedback, feedback ^^ and comments about the, ah *** the cinematography and how I want it ah*** shot&&, to be set up.”)。最重要的是,译员2的错译、漏译和不妥翻译较多,忠实度偏低,例如出现了较多的漏译(5处),偏移和显化也较为明显(详见表2)。

Table 2. Assessment of interpreting quality

表2. 口译质量评估

3.3. 手势分析

就手势总数来看,表3显示:讲者1大于讲者2 (64 > 53),但是译员2远大于译员1 (164 > 73);同时,两位译员的手势都大于他们的讲者,译员1和讲者1的手势总数差异较小(73 > 64),而译员2的手势总数则是讲者2的近3倍(164 > 53),说明译员(尤其是译员2)希望通过调用更多的手势资源来消解接近饱和的认知负荷,这与副语言特征数据一致。

就手势类型来看,不管是讲者还是译员,主要手势类型都是节奏性手势,占比在70%~85%之间,且两名译员的节奏性手势占比都高于各自的讲者。所有讲者和译员都发出了隐喻性、指向性和图示性手势,但只有讲者1和译员2使用了适应性手势,尤其是译员2,共出现了7次用以缓和心理紧张的适应性手势(见表3)。

就手势位置来看,讲者1和译员2分别有41个和25个高位手势(胸及以上位置),译员1和讲者2的高位手势较少,分别是3个和5个。影响高位手势数量的主要因素包括:发言时是站着还是坐着、是否需要记笔记,以及整体认知负荷的大小。站着发言的讲者1使用的高位手势明显多于坐着的讲者2;译员1需要记笔记,所以虽然站着,高位手势也很少;译员2承受的认知负荷大于译员1,所以高位手势也更多(见表3)。

就手势频率来看,译员2的手势频率既高于讲者2 (0.98次/秒 > 0.51次/秒),也高于译员1 (0.98次/秒 > 0.67次/秒),再次说明译员2试图通过积极调动手势资源来应对较高的认知负荷。就手势时长来看,讲者1和译员1的手势更加从容而缓慢,而讲者2和译员2的手势则更加快速而频繁。结合前面的语言和副语言分析,这说明认知负荷更高时,手势平均时长更短,手势可能更加快速而频繁(见表3)。

就手势模仿来看,译员1模仿了讲者1手势的43.06%,译员2模仿了讲者2手势的37.74%。译员1对讲者1的手势模仿度高于译员2对讲者2的手势模仿度(43.06% > 37.74%)。结合前面的口译质量评分,似乎说明更多地模仿讲者手势有助于提升口译质量(见表3)。同时,手势模仿并不等同于手势类型不变。例如,讲者的隐喻性手势可能被译员模仿,但译员可能不再使用隐喻性手势,而是使用节奏性手势。
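手势频率与手势模仿度的计算同样可用简短的Python片段加以示意(数值取自正文,函数与变量命名为假设):

def gesture_frequency(gesture_count, seconds):
    """手势频率:手势总数 ÷ 发言时长,单位为次/秒。"""
    return gesture_count / seconds

def imitation_rate(imitated_count, speaker_total):
    """手势模仿度:译员模仿的手势数占讲者手势总数的比例(分母口径以表3为准)。"""
    return imitated_count / speaker_total

# 以译员2为例:164个手势 ÷ 167.98秒 ≈ 0.98次/秒
print(round(gesture_frequency(164, 167.98), 2))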

Table 3. Numbers and classifications of speaker and interpreter gestures

表3. 译员和讲者的手势特征

4. 讨论

综上,对包含隐喻类信息的现场口译手势多模态分析发现:1) 讲者手势和译员手势都与认知负荷息息相关:认知负荷更大时,手势数量更多,手势位置更高,手势更加快速而频繁,但手势类型主要都是节奏性手势;2) 口译质量分数更高的译员,更倾向于模仿讲者手势,但模仿后的手势类型可能发生变化;3) 手势、语言和副语言产品在口译认知加工中具有跨模态统一性,一道映射出了讲者向译员输出的认知负荷以及译员自身的认知努力。同时,手势数量也可能受到现场语境因素(是否站立和是否记笔记等)以及译员认知资源的影响。

本文对口译手势的多模态分析结论与前期研究发现基本一致。前期研究发现手势有助于口译认知加工:负荷较高时,译员手势数量会增加,频率会加快,手势位置也会升高,而且译员和讲者的手势存在相似性 [29] - [35]。再者,手势资源可以辅助语言理解、组织、生产等认知过程,尤其是补充译员有限的工作记忆资源,这也与前期研究结论一致 [9] [10] [18] [19] [20] [21] [23] [24]。同时,本研究进一步发现,在处理隐喻类信息时,节奏性手势的数量仍多于隐喻性或图示性手势,而且译员的节奏性手势占比高于讲者,这可能是因为节奏性手势的使用成本更低,更适合认知负荷更高的译员。

5. 结语

本研究在国内首次使用ELAN软件对现场交传中的口译手势进行了多模态综合式研究,发现:讲者和译员打手势的现象普遍存在;讲者和译员的主要手势类型都是节奏性手势;手势具有辅助认知加工、凸显语义、衔接语篇和调适心理等作用;认知负荷更高的译员,手势数量更多,手势位置更高;讲者的手势会被译员还原和模仿,但译员手势也映射出自身的认知努力和策略。在口译教学中,教师可让学生意识到手势是重要的认知资源,平衡手势与其他口译技巧和策略的使用。在未来研究中,研究者可通过进一步扩大研究样本来验证本文的初步结论。

基金项目

本文是中央高校基本科研业务费项目“同声传译认知负荷的减荷策略研究”(2019skzx-pt211)和“同传译前准备策略研究”(2019自研-外语10)的成果之一。

参考文献


[1] Feyereisen, P. and De Lannoy, J. (1991) Gestures and Speech: Psychological Investigations. Cambridge University Press, Cambridge.
[2] Kendon, A. (1980) Gesticulation and Speech: Two Aspects of the Process of Utterance. In: Key, M.R., Ed., The Relationship of Verbal and Nonverbal Communication, Mouton, Hague, The Netherlands, 208-227.
https://doi.org/10.1515/9783110813098.207
[3] Bressem, J. and Ladewig, S. (2011) Rethinking Gesture Phases: Articulatory Features of Gestural Movement. Semiotica, 2011, 53-91.
https://doi.org/10.1515/semi.2011.022
[4] Kendon, A. (2004) Gesture: Visible Action as Utterance. Cambridge University Press, Cambridge.
https://doi.org/10.1017/CBO9780511807572
[5] McNeill, D. (1992) Hand and Mind: What Gestures Reveal about Thought. University of Chicago Press, Chicago.
[6] Bruner, J.S. (1990) Acts of Meaning. Harvard University Press, Cambridge.
[7] Alibali, M., Bassok, M., Solomon, K., Syc, S. and Goldin-Meadow, S. (1999) Illuminating Mental Representations through Speech and Gesture. Psychological Science, 10, 327-333.
https://doi.org/10.1111/1467-9280.00163
[8] McNeill, D. (2005) Gestures and Thought. University of Chicago Press, Chicago.
[9] Goldin-Meadow, S. and Wagner, S.M. (2005) How Our Hands Help Us Learn. Trends in Cognitive Sciences, 9, 234-241.
https://doi.org/10.1016/j.tics.2005.03.006
[10] Goldin-Meadow, S. (2011) Learning through Gesture. Wiley Interdisciplinary Reviews: Cognitive Science, 2, 595-607.
https://doi.org/10.1002/wcs.132
[11] Chu, M., Meyer, A., Foulkes, L. and Kita, S. (2014) Individual Differences in Frequency and Saliency of Speech-Accompanying Gestures: The Role of Cognitive Abilities and Empathy. Journal of Experimental Psychology: General, 143, 694-709.
https://doi.org/10.1037/a0033861
[12] Gillespie, M., James, A.N., Federmeier, K.D. and Watson, D.G. (2014) Verbal Working Memory Predicts Co-speech Gesture: Evidence from Individual Differences. Cognition, 132, 174-180.
https://doi.org/10.1016/j.cognition.2014.03.012
[13] Pouw, W.T., Mavilidi, M.F., Van Gog, T. and Paas, F. (2016) Gesturing during Mental Problem Solving Reduces Eye Movements, Especially for Individuals with Lower Visual Working Memory Capacity. Cognitive Processing, 17, 269-277.
https://doi.org/10.1007/s10339-016-0757-6
[14] Wu, Y.C. and Coulson, S. (2014) Co-Speech Iconic Gestures and Visuo-Spatial Working Memory. Acta Psychologica, 153, 39-50.
https://doi.org/10.1016/j.actpsy.2014.09.002
[15] Özer, D. and Göksun, T. (2020) Visual-Spatial and Verbal Abilities Differentially Affect Processing of Gestural vs. Spoken Expressions. Language, Cognition and Neuroscience, 35, 896-914.
https://doi.org/10.1080/23273798.2019.1703016
[16] Kendon, A. (1972) Some Relationships between Body Motion and Speech. In: Seigman, A. and Pope, B., Eds., Studies in Dyadic Communication, Pergamon Press, Elmsford, 177-216.
https://doi.org/10.1016/B978-0-08-015867-9.50013-7
[17] Skipper, J., Goldin-Meadow, S., Nusbaum, H. and Small, S. (2007) Speech-Associated Gestures, Broca’s Area, and the Human Mirror System. Brain and Language, 101, 260-277.
https://doi.org/10.1016/j.bandl.2007.02.008
[18] Hostetter, A.B., Alibali, M.W. and Kita, S. (2007) I See It in My Hands’ Eye: Representational Gestures Reflect Conceptual Demands. Language and Cognitive Processes, 22, 313-336.
https://doi.org/10.1080/01690960600632812
[19] Jenkins, T., Coppola, M. and Coelho, C. (2017) Effects of Gesture Restriction on Quality of Narrative Production. Gesture, 16, 416-431.
https://doi.org/10.1075/gest.00003.jen
[20] Rauscher, F.H., Krauss, R.M. and Chen, Y. (1996) Gesture, Speech, and Lexical Access: The Role of Lexical Movements in Speech Production. Psychological Science, 7, 226-231.
https://doi.org/10.1111/j.1467-9280.1996.tb00364.x
[21] Morsella, E. and Krauss, R.M. (2004) The Role of Gestures in Spatial Working Memory and Speech. The American Journal of Psychology, 117, 411-424.
https://doi.org/10.2307/4149008
[22] Dargue, N., Sweller, N. and Jones, M.P. (2019) When Our Hands Help Us Understand: A Meta-analysis into the Effects of Gesture on Comprehension. Psychological Bulletin, 145, 765-784.
https://doi.org/10.1037/bul0000202
[23] Goldin-Meadow, S., Cook, S.W. and Mitchell, Z.A. (2009) Gesturing Gives Children New Ideas about Math. Psychological Science, 20, 267-272.
https://doi.org/10.1111/j.1467-9280.2009.02297.x
[24] Stieff, M., Lira, M.E. and Scopelitis, S.A. (2016) Gesture Supports Spatial Thinking in STEM. Cognition and Instruction, 34, 80-99.
https://doi.org/10.1080/07370008.2016.1145122
[25] Chu, M. and Kita, S. (2011) The Nature of Gestures’ Beneficial Role in Spatial Problem Solving. Journal of Experimental Psychology: General, 140, 102-116.
https://doi.org/10.1037/a0021790
[26] Eielts, C., Pouw, W., Ouwehand, K., Van Gog, T., Zwaan, R.A. and Paas, F. (2020) Co-Thought Gesturing Supports More Complex Problem Solving in Subjects with Lower Visual Working-memory Capacity. Psychological Research, 84, 502-513.
https://doi.org/10.1007/s00426-018-1065-9
[27] Gluhareva, D. and Prieto, P. (2017) Training with Rhythmic Beat Gestures Benefits L2 Pronunciation in Discourse-Demanding Situations. Language Teaching Research, 21, 609-631.
https://doi.org/10.1177/1362168816651463
[28] Novack, M. and Goldin-Meadow, S. (2015) Learning from Gesture: How Our Hands Change Our Minds. Educational Psychology Review, 27, 405-412.
https://doi.org/10.1007/s10648-015-9325-3
[29] Furuyama, N., Nobe, S., Someya, Y., Sekine, K. and Hayashi, S. (2005) A Study on Gestures in Simultaneous Interpreters. Interpretation Studies, 5, 111-136.
[30] Rennert, S. (2008) Visual Input in Simultaneous Interpreting. Meta, 53, 204-217.
https://doi.org/10.7202/017983ar
[31] Zagar-Galvão, E. (2009) Speech and Gesture in the Booth—A Descriptive Approach to Multimodality in Simultaneous Interpreting. In: De Crom, D., Ed., Selected Papers of the CETRA Research Seminar in Translation Studies 2008, CETRA, Leuven.
[32] Zagar-Galvão, E. (2013) Hand Gestures and Speech Production in the Booth: Do Simultaneous Interpreters Imitate the Speaker? In: Carapinha, C. and Santos, I., Eds., Estudos de Linguística, Coimbra University Press, Coimbra, 115-129.
https://doi.org/10.14195/978-989-26-0714-6_7
[33] Zagar-Galvão, E. (2020) Gesture Functions and Gestural Style in Simultaneous Interpreting. In: Salaets, H. and Brône, G., Eds., Linking up with Video: Perspectives on Interpreting Practice and Research, John Benjamins Publishing Company, Amsterdam/Philadelphia, 151-179.
https://doi.org/10.1075/btl.149.07gal
[34] Stachowiak-Szymczak, K. (2019) Eye Movements and Gestures in Simultaneous and Consecutive Interpreting. Springer International Publishing, Cham.
https://doi.org/10.1007/978-3-030-19443-7
[35] Gerwing, J. and Li, S. (2019) Body-Oriented Gestures as a Practitioner’s Window into Interpreted Communication. Social Science & Medicine, 233, 171-180.
https://doi.org/10.1016/j.socscimed.2019.05.040
[36] Gile, D. (2009) Basic Concepts and Models for Interpreter and Translator Training. Revised Edition, John Benjamins Publishing Company, Amsterdam.
https://doi.org/10.1075/btl.8
[37] Cienki, A. and Müller, C. (2008) Metaphor and Gesture. John Benjamins, Amsterdam.
https://doi.org/10.1075/gs.3
[38] 齐涛云, 杨承淑. 多模态同传语料库的开发与建置——以职业译员英汉双向同传语料库为例. 中国翻译, 2020, 41(3): 126-135, 189.
[39] 王斌华. 语料库口译研究——口译产品研究方法的突破. 中国外语, 2012(3): 96-102.
[40] Goldman-Eisler, F. (1958) Speech Production and the Predictability of Words in Context. The Quarterly Journal of Experimental Psychology, 10, 96-109.
https://doi.org/10.1080/17470215808416261
[41] Plevoets, K. and Defrancq, B. (2016) The Effect of Informational Load on Disfluencies in Interpreting. Translation and Interpreting Studies: The Journal of the American Translation and Interpreting Studies Association, 11, 202-224.
https://doi.org/10.1075/tis.11.2.04ple
[42] Plevoets, K. and Defrancq, B. (2018) The Cognitive Load of Interpreters in the European Parliament: A Corpus-based Study of Predictors for the Disfluency uh(m). Interpreting, 20, 1-28.
https://doi.org/10.1075/intp.00001.ple
[43] Xiang, X., Zheng, B. and Feng, D. (2020) Interpreting Impoliteness and Over-politeness: An Investigation into Interpreters’ Cognitive Effort, Coping Strategies and Their Effects. Journal of Pragmatics, 169, 231-244.
https://doi.org/10.1016/j.pragma.2020.09.021
[44] Paas, F. and Sweller, J. (2012) An Evolutionary Upgrade of Cognitive Load Theory: Using the Human Motor System and Collaboration to Support the Learning of Complex Cognitive Tasks. Educational Psychology Review, 24, 27-45.
https://doi.org/10.1007/s10648-011-9179-2
[45] Gile, D. (1999) Testing the Effort Models’ Tightrope Hypothesis in Simultaneous Interpreting—A Contribution. Hermes, 12, 153-172.
https://doi.org/10.7146/hjlcb.v12i23.25553
[46] Gile, D. (2008) Local Cognitive Load in Simultaneous Interpreting and Its Implications for Empirical Research. Forum, 6, 59-77.
https://doi.org/10.1075/forum.6.2.04gil
[47] Xie, B. and Salvendy, G. (2000) Prediction of Mental Workload in Single and Multiple Task Environments. International Journal of Cognitive Ergonomics, 4, 213-242.
https://doi.org/10.1207/S15327566IJCE0403_3
[48] Gile, D. (2011) Errors, Omissions and Infelicities in Broadcast Interpreting. Preliminary Findings from a Case Study. In: Alvstad, C., Hild, A. and Tiselius, E., Eds., Methods and Strategies of Process Research. Integrative Approaches in Translation Studies, John Benjamins, Amsterdam/Philadelphia, 201-218.