译员大脑探索之旅(二)了解认知负荷——为什么同声传译对认知能力要求很高?


发表于2021年5月20日   
Part 2 of 译员大脑探索之旅

与其他双语者一样,会议口译员不能简单地屏蔽其中一种语言,这样做是没有任何好处的,因为译员经常需要让至少两种语言处于高度激活状态。那么,他们如何避免思维混乱,并调动必要的认知资源,来应对如此复杂的任务所产生的负荷?

我们对口译员大脑的探索,就从双语者的大脑开始吧。按照定义,所有的口译员都是双语者,当然也有多语者。这里讲的双语包括口语和手语,甚至有证据表明,方言等语言变体,在双语者的大脑中也是按照类似于正式语言的方式处理的。

从这一点来讲,口译员并不特殊。全球约有60%的人口能讲两种或两种以上语言,但我们应该记住,双语者的语言能力不能等同于两个单语者的能力。

语言的使用是高度依赖语境的,即便对双语者来说,用两种(或多种)语言来体验他们生活中的每一件事情也是非常不自然的。相反,他们可能在做某些事情时习惯使用其中一种语言,做其它事情时习惯使用另一种语言。例如,一个双语者可能会更喜欢用法语谈论食谱,而谈足球时更习惯用英语——说实话,生活中还有比这更糟的事。


两种或多种语言在大脑中共存,也许真的会产生一定的影响。

尽管将人脑比作计算机或许并不恰当,不过心理学家几十年前提出的这个比喻还是很容易说明问题的。用计算机的术语来说,最主要的因素大概不是硬盘(相当于人脑的长期记忆)的可用空间,因为一个受过教育的英文母语人士可以毫不费力地存储大约四万个单词,而且每学一种新的语言,又可以多记几千个单词。

问题的关键应该在于内存(人类工作记忆)的容量,这部分空间可以临时存储数据,以便进一步加工。虽然在许多方面,人脑仍然比最先进的超级计算机性能高出几个数量级(至少目前是这样),几百个神经元之间的传输就能轻松搞定计算机需要几百万步完成的操作,但人类的工作记忆(有时被理解为可以一次性储存在大脑中的信息量)与计算机相比的确有限。

虽然工作记忆的容量很难准确量化,但不同的说法认为,人脑能够保持激活或暂时储存的离散信息确实不多。因此,认知负荷的概念在很大程度上描述的正是这种有限的处理能力,因为这一概念描述的是特定任务对可用心理资源的需求。

我们用一个非常简单的公式就可以讲清楚:只要我们能够调动充足的资源,系统就能应付,大脑就能执行任务。

(图:简单公式)

现实状况要稍微复杂一些。事实表明,尽管一些基本资源可以在不同的任务之间分配,但原则上,资源的分配在很大程度上取决于任务的类型和性质。因此,口译员可能无法简单地将用于一件事的所有精力重新分配到另一件事情上。

这有点类似于夏季烧烤聚会,你的客人需要戴着口罩、保持社交距离,他们只有喝到自己喜欢的饮料才会开心:有人喝啤酒,有人喝白葡萄酒,还有人喝红葡萄酒(经过一整年的封锁和出行限制,可以假设大多数之前从不喝酒的人或多或少会喝一点)。只要每一种酒都足够多,每个人都会得到满足。不过,如果啤酒喝完了,不管冰柜里还剩下多少红葡萄酒或白葡萄酒,某位爱喝啤酒的叔叔也会大发雷霆。


有一个理论经过充分实验得到了证实。该理论假设,信息处理的不同阶段、不同模态和不同编码会调用不同的认知资源。也就是说,当各项任务依赖各自专属的认知资源、彼此互不干扰时,它们更容易并行完成。

我们可以利用“冲突矩阵”模型,预测、描述几十个资源彼此之间相互干扰的状况,可以得到类似这样的结论:语言理解这项任务很可能会干扰同时进行的语言输入处理任务,而不太可能干扰语言输出任务。同样,运动等空间反应任务,对认知、知觉或语言产生过程大概不会有什么影响。据说,有些早期的口译员在同传箱工作时,可以轻松地打毛衣、织围巾(当然,并不完全是女性),这也不难理解了。但与此同时,我们也会对口译员在翻译的同时做填字游戏的轶事深感怀疑,因为书面语言输入与听觉语言输入会产生较高的认知负荷,也会造成一定的干扰。



因此,边听边讲对大脑的影响似乎并非微不足道。

更为复杂的是,在双语者中,语言输入和输出,或者说与语言感知与产生有关的刺激,都需要与存储在他们神经网络中的对应语言产生联系,以避免“串台”。断开计算机中存储某种语言的硬盘是很容易的,不过双语者却不能简单地“关掉”一种语言。因此,尽管双语者更习惯用英语讨论欧洲杯决赛,也因此期待对方讲英语并用英语交谈,但他们的大脑仍然需要有意激活当前使用的语言,同时抑制其它语言。

对数百万双语者来说,这样的控制是十分自然的。不过对某些双语者来说,情况却更加复杂。

普通的双语者通常要么用某种语言讲话,要么用某种语言聆听(有时他们会先用一种语言聆听,然后用另一种语言回应,甚至中途换语言);同声传译员则不同,他们必须同时完成这两项任务,因为输入与输出在大约四分之三的时间内是重叠的。

 


普通听众如果一边聆听一边讲话(哪怕只是重复无实际意义的音节),工作记忆就会受到很大影响,回忆所听内容的能力也会打折扣。这是因为“默读复诵”机制受到了干扰:这一机制能让大脑保持听觉信息的激活状态,以便进一步加工。

口译员不受这一机制的影响,或者说受到的影响要小很多,或许他们能对这一机制造成的损耗做出补偿,但其中的原因我们不得而知。然而,不断在听到输入语言信息的同时讲话,这种看似机械的挑战只是同声传译中认知负荷的冰山一角。输入语言的一些特征也会给译员带来挑战,情况也会变得更加复杂。

同声传译员每天要面对哪些认知方面的具体挑战,他们如何应对?这将是我们旅程的下一站要关注的内容。

Part 3 - A loaded question - Is all simultaneous interpreting equally demanding?
 

作者简介

Kilian Seeber

Kilian G. Seeber 瑞士日内瓦大学高级翻译学院副教授、副院长,会议口译硕士 (MACI) 与口译培训硕士(MASIT)课程主任,口译认知研究实验室(LaborInt)和口译与技术实验室(InTTech)首席研究员。于奥地利维也纳大学获得口笔译专业学士学位,于瑞士日内瓦大学获得口译专业硕士与博士学位,曾在英国约克大学从事心理语言学博士后研究。研究兴趣为复杂语言处理任务中的认知负荷与多模态加工,发表大量学术著作。(联系方式: kilian.seeber@unige.ch)


翻译:潘东鹏

(Translated by: PAN Dongpeng, UniGe)











Back to the AIIC Blog



Get a load of this: What makes simultaneous interpreting cognitively demanding?


20 May 2021   
Part 2 of A Journey into the Interpreter's Mind

Like other bilinguals, conference interpreters cannot simply switch off a language. On the contrary – any attempt to do so would be futile, seeing that they constantly need at least two languages to be highly activated. So how do they avoid getting their wires crossed and conjure up the cognitive resources necessary to absorb the load generated by such a complex task?

Our journey into the interpreter’s mind begins with the bilingual brain. All interpreters are by definition bilingual, if not multilingual. This bilingualism includes spoken and signed languages, and there is even evidence to suggest that language varieties, such as dialects, might be processed in a similar way to fully-fledged languages in the bilingual brain.

And we are far from alone. Around 60% of the global population speaks two or more languages, though we should bear in mind that the language capacity of one bilingual cannot be defined as equal to that of two monolinguals.

The use of language is highly contextual and even for bilinguals it would be rather unnatural to experience every event in their lives in two (or more) languages. Instead, they might gravitate towards one language for some things and towards another language for others. For example, a bilingual might feel more comfortable sharing recipes in French yet be more at ease discussing football in English – and let’s be honest, there are worse things in life. 


The fact that two or more languages coexist in the brain does appear to have certain implications.

Inadequate as the comparison of the human brain to a computer might be, for the purpose of illustration, this metaphor introduced several decades ago by psychologists still comes in handy. In computing terms, it would appear that the principal issue is not the space available on the hard drive (in other words in our long-term memory), as an educated speaker of English can effortlessly store some 40,000 words and still add several thousand additional ones for each new language learned.

The crux seems to be the amount of RAM, or working memory, that is available to temporarily store data for further processing. While in many respects the human brain still outperforms state-of-the-art supercomputers by several orders of magnitude (for the time being at least), with a few hundred neuronal transmissions easily taking care of operations that take a computer a few million steps, it is also true that human working memory, sometimes understood as the amount of information that can be kept in mind at once, is relatively limited.

Although it is difficult to quantify the working memory’s capacity exactly, different accounts put it in the ballpark of a handful of discrete items that the human brain can keep activated or temporarily stored. The idea of cognitive load, therefore, is very much linked to this limited processing capacity, as it is a way to describe the demands a particular task places on the available mental resources.

A very simple equation, then, would posit that as long as effort can mobilize resources to match demand, the system will cope, and the brain will be able to perform the task.

[Figure: simple sum]
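The "simple sum" described above lends itself to a one-line sketch. The notation here is mine, not the author's:

```latex
% The system copes as long as the resources that effort can
% mobilize cover the demand the task places on them:
\[
  R_{\text{mobilized}} \;\geq\; D_{\text{task}}
  \quad \Longrightarrow \quad \text{task performed}
\]
```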

In reality, things are slightly more complex than that. Evidence suggests that while there might be some basic general resources that can be allocated across different tasks, in principle, resources are highly contingent on the type and nature of a task. Interpreters, therefore, might not be able to simply reroute all the mental resources devoted to one thing to another.

It is a bit like your summer BBQ, where your masked and socially distanced guests all require their beverage of choice to be happy: there will be those who drink beer, those who drink white and those who drink red (after a whole year of lockdowns and confinement it’s probably safe to assume that most teetotalers have caved). So long as you have enough of each going around, everyone is content. But when you run out of brewskis, your uncle Bob will throw a fit, no matter how much red or white is left in the ice chest.


One theory, borne out by comprehensive experimental evidence, assumes different cognitive resources for different processing stages, modalities and codes. That means that tasks can coexist more easily when they rely on their own dedicated resources and don’t interfere with other tasks.

Thanks to a so-called conflict matrix, we can predict and describe interference across dozens of resource pairs, suggesting, for example, that the comprehension of verbal input interferes more with the processing of verbal input than it does with the production of verbal output. Similarly, spatial responses, such as motor tasks, seem to have little bearing on the execution of cognitive, perceptual or production tasks. This, in turn, would explain the relative ease with which some early interpreters – apparently not exclusively women – allegedly knitted sweaters and scarves whilst working in the booth. At the same time, it would cast considerable doubt on recurring anecdotes of colleagues completing crossword puzzles whilst simultaneously interpreting, as the combination of a written verbal input with auditory verbal input would generate a substantial amount of load and interference.
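To make the conflict-matrix idea concrete, here is a toy sketch in Python. The three dimensions (processing stage, modality, code) follow the multiple-resource framework the paragraph alludes to, but the task tuples and the simple share-counting rule are my own illustration, not values from the published matrix:

```python
# Toy sketch of a conflict matrix: two concurrent tasks interfere
# roughly in proportion to how many resource dimensions they share.
# Dimension labels and the counting rule are simplified for
# illustration, not taken from the published matrix.

def interference(task_a, task_b):
    """Count shared resource dimensions between two tasks (0-3)."""
    return sum(a == b for a, b in zip(task_a, task_b))

# Each task as a (processing stage, modality, code) triple:
listening = ("perception", "auditory", "verbal")
speaking  = ("response",   "vocal",    "verbal")
reading   = ("perception", "visual",   "verbal")
knitting  = ("response",   "manual",   "spatial")

# Listening while speaking shares only the verbal code...
print(interference(listening, speaking))   # → 1
# ...whereas listening while reading also shares the perceptual stage.
print(interference(listening, reading))    # → 2
# Knitting shares nothing with listening.
print(interference(listening, knitting))   # → 0
```

On this crude measure, knitting in the booth competes with listening on no dimension, while reading a crossword clue competes on two, which is the pattern the anecdotes above suggest.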



The effect of listening and speaking at the same time on the human brain, therefore, does not seem to be inconsequential.

To complicate matters, in bilinguals, both input and output, in other words the language-related stimuli they perceive and produce, need to be linked to the correct language stored in their neural networks to avoid crosstalk. But while it would be easy to simply disconnect a computer’s hard drive on which a particular language is stored, bilinguals cannot simply switch off a language. Consequently, although our bilingual is much more likely to discuss the Cup Final in English, and therefore speak and expect to hear English, her brain still needs to deliberately activate the language currently in use whilst suppressing all others.

While this is par for the course for millions of bilinguals, for some bilinguals it is more complicated still.

Unlike ordinary bilinguals, who normally either talk or listen to someone in a particular language (although they might well listen to something in one language to then respond in a different one, including changing language half-way through), simultaneous interpreters have to do both at the same time, as the input and their output overlap about three quarters of the time.

 


When ordinary listeners are put in a situation where they produce language (even non-sensical speech, such as a single syllable) whilst listening, their working memory ability is significantly impacted, diminishing their ability to recall what has been said. For most people, this is because a mechanism called subvocal rehearsal, which allows the brain to keep auditory information activated for further processing, is hampered.

Somehow – we do not yet know how – interpreters are unaffected by this phenomenon, or affected far less, suggesting that they are able to compensate for it. The seemingly mechanical challenge of constantly having to talk over the voice they actually want to hear, however, is only the tip of the iceberg when it comes to the sources of load in simultaneous conference interpreting. The task is potentially compounded by challenges inherent to the features of the input.

The specific cognitive obstacles simultaneous conference interpreters contend with on a daily basis and how they tackle them will be the next stop on our journey.

Part 3 - A loaded question - Is all simultaneous interpreting equally demanding?
 

About the Author

Kilian Seeber

Kilian G. Seeber is associate professor and Vice Dean of the University of Geneva’s Faculty of Translation and Interpreting (FTI). He is the program director of the MA in Conference Interpreting (MACI) and the MAS in Interpreter Training (MASIT) as well as PI in the Laboratory for Cognitive Research in Interpreting (LaborInt) and in the Laboratory on Interpreting and Technology (InTTech). Kilian earned a graduate degree in Translation and Interpreting from the University of Vienna (Austria), as well as a postgraduate degree and a PhD in Interpreting from the University of Geneva (Switzerland) before completing his postdoctoral work in psycholinguistics at the University of York (United Kingdom). Kilian’s research interests include cognitive load and multimodal processing during complex language processing tasks, topics on which he has published widely. ( kilian.seeber@unige.ch)











