The Translator’s Amanuensis 2020
Elisa Alonso, Universidad Pablo de Olavide de Sevilla
Lucas Nunes Vieira, University of Bristol
ABSTRACT
This paper is an exercise in imagination. Based on Kay’s (1980) inspiring idea of a translator’s amanuensis, we attempt to describe a post-editing tool that enables ubiquitous translation (Cronin 2010). We argue that a parallelism exists between media remediation (Bolter and Grusin 1999) and the shifting phase translation is undergoing, with machine translation post-editing having an impact on the global workflow of translated content. We take the hybridisation of traditional and machine translation processes as a starting point to envisage the features of forthcoming translation technologies. Results of previous surveys helped us to select features expected to play a central role: versatile devices, to which we broadly refer as displayers, would enable ubiquity; a relevant knowledge feature would provide human translators with a well-assorted repertoire of reliable sources; and an effort prediction feature would provide post-editors with reliable estimates of how much work lies ahead. Interacting with the Translator’s Amanuensis 2020 would not always be straightforward, however: translators would have to adapt to richer ways of reading and visualising information. Ultimately, we argue that the Translator’s Amanuensis 2020 could benefit from existing Translation Studies concepts: the study of translation problems, translation competence models, and the ethics and sociology of translation.
KEYWORDS
Translation tools, computer-assisted translation, translation memory systems, machine translation, post-editing tools, media, remediation.
1. Introduction
Considering recent advances, and how computing in general and CAT systems in particular have evolved, any prediction is risky. Change is hardly expected to slacken, so attempting to envision state-of-the-art in 2020 would be guesswork at best. What is virtually certain is that by then, the systems of today will look as outdated as DOS-based software looks now (Garcia 2015: 85).
The debate around the ideal translator’s workstation – or the translator’s amanuensis, as first enunciated by Martin Kay in 1980 – has been revisited within computational linguistics and translation studies over the last thirty years. How should technology assist human translators (e.g. Hutchins 1998; Melby 2006; Alonso and Calvo 2015)? What is the proper place of humans and machines in language translation (Kay 1980)? What is the proper place of professionals, non-professionals and machines in web translation (García 2010; Alonso and de la Cova 2016)? How should technological skills contribute to translator training (Enríquez Raído 2013; Calvo 2015)? Why should translators perform post-editing of machine translation (MT) (Declercq 2015)? A prescriptive rhetoric emanates from these questions, with suggestions on the different ways in which the translation community should adapt to a changing landscape.
Studies with a more descriptive aim arising from software manufacturers, the translation industry and academia have tried to evaluate or compare existing translation workstations (e.g. García and Stevenson 2009; Vieira and Specia 2011; O’Brien 2013; Alonso and de la Cova 2014). In the last fifteen years we have also witnessed an increasing number of empirical investigations into the use of technology in translation, often in the form of surveys (e.g. Fulford and Granell-Zafra 2005; Lagoudaki 2006; Durán Muñoz 2010; Torres 2012; Guerberof Arenas 2013; Alonso 2015), cognitive and/or ethnographic studies (e.g. O’Brien 2006, 2008; Dragsted 2008; Désilets et al. 2009; LeBlanc 2013), or studies focusing on processes and applications (O’Brien et al. 2014). There is also increasing interest in the ergonomics of translation, with studies calling for enhanced translation tools that do not harm translators’ health or curb creativity (Lavault-Olléon 2011; Ehrensberger-Dow 2015). This body of research provides us with first-hand information on the experiences and opinions of the translation industry’s real actors. Thanks to these studies, we are in a position to know what translators’ needs and difficulties are, what tools and resources they turn to while translating, and what features they consider positive or negative in a translation tool.
According to Chan (2015: 26), the increasingly fast development of MT and computer-aided translation since their inception in the 1940s and in 1967, respectively, ‘will maintain its momentum for many years to come’. As Garcia argues, it is risky to predict how translation technologies will evolve, but there seems to be a consensus – at least among the optimistic – around the idea that post-editing will be key in forthcoming years, which underlines the importance of focusing on the implications and consequences of this form of human-computer interaction:
Given all this technological ferment, one might wonder how professional translation software will appear by the end of the present decade. Technology optimists seem to think that MT post-editing will be the answer in most situations, making the translator-focused systems of today redundant (Garcia 2015: 85).
In this paper, we discuss the different possibilities presented by post-editing and how this form of human-computer interaction might shape translation practices and the different tools used by translators in a not-so-distant future. Post-editing is understood here as ‘a process of improving through modification (rather than revision) a machine-generated translation, often eyeing a minimum of effort on behalf of the post-editor’ (Declercq 2015: 485). Our methodology pursues an interpretative – and at times philosophical – approach. This paper is to a large extent an exercise in imagination that tries to envisage future scenarios for translation – similar to those depicted in the cyborg translation paper written by Robinson (2003). We do so by triangulating the different data sources currently available with the aim of reflecting – modestly and within our limitations – on the design and features that the translator’s amanuensis could incorporate. While it is, indeed, probable that translation tools will go through drastic changes in the short term, as predicted by Garcia (2015), the reference made to the year 2020 in this paper is largely symbolic; the features presented here are discussed as likely additions to translators’ workstations in the near future, but without a precise date being proposed for their implementation.
2. Translation as new media
Over the last few decades we have witnessed a reformulation of translation as a social construct, as a discipline, and as a process. As in many other fields, in translation the impact of the internet has marked ‘the start of a new era: an era characterized by a radical break with past concepts and models of thought’ (Alonso and Calvo 2015: 136). Once thought of as lone workers, translators are now increasingly connected through forums, servers and cloud technologies (see ibid: 139), a fact that changes the perception of translation as an individual practice, moving it towards an increasingly collective activity, in a technological turn that gives rise to new formats and devices.
According to Jenkins (2008: 13-14), the concept of a medium can be approached at two levels: ‘on the first, a medium is a technology that enables communication; on the second, a medium is a set of associated “protocols” or social and cultural practices that have grown up around technology’. From a broad perspective, one could draw a parallel between the paradigm shift that translation is undergoing and what is happening to traditional media. According to Bolter and Grusin (1999), media continuously go through trends of immediacy, hypermediation and remediation, i.e. the process of offering immediate and automatic accessibility to a medium user, the process of presenting users with a wealth of information that reminds them of the medium’s possibilities, and the process of having one medium represented in another, respectively. These processes are not new. Throughout history, new media have remediated previous ones: books have been remediated as films, for instance, and films have been remediated as TV or internet entertainment. Following our parallelism – that translation is comparable to media – the infiltration of machine translation in the global workflow of translated content could be considered a process of remediation, where what was once communicated via a human-only activity is now communicated via an automatic, computerised process. This parallelism holds at both of the levels at which Jenkins (2008: 13-14) approaches the concept of a medium. At the first level, machine translation can be regarded as the new medium enabling communication, while, at the second level, machine translation can be seen as a broader set of procedures that change the ways in which translation is perceived and carried out, for example via post-editing, with the raw MT output being used for gisting, or with human translators and machine translation systems interacting in the process of producing the text (e.g. in interactive MT, where the MT output adapts itself to human translators’ edits; see Lilt (n.d.)).
The logic of remediation applied to the case of translation would imply that MT – the emerging medium – strives to be perceived as a more immediate experience than traditional translation. However, MT technology – arguably the Holy Grail of global communication – still resorts to human translation through a number of diverse mechanisms: by incorporating post-editing practices, by adhering to quality standards normally applied to human translation, by mining and processing human-produced corpora, or by trying to emulate the neural connections of the human brain, as in recently developed MT system architectures based on neural networks (Cho et al. 2014).
Indeed, processes of remediation can come about in a number of shapes. According to Bolter and Grusin (1999: 45), the range and diversity of these shapes depend on the amount of competition between the old medium and the new. In view of this, we envisage that the forthcoming repertoire of translation modalities will be rather heterogeneous; multilingual communication will not be instant, automatic and ubiquitous (i.e. distributed everywhere and embedded in most devices used on a day-to-day basis [Weiser 1993; Cronin 2010]) in all cases, since the degree to which a process of this kind occurs will depend on the rivalry between the new medium and the old. We would argue that translation will not gain from immediacy and ubiquity in scenarios where it is appreciated as essentially the human process of understanding, re-expressing and linking cultures, in line with the concept of cultural translation (Pym 2010: 138) introduced in recent Translation Studies research. In this theoretical framework, translation is not understood as a mere commercial product, as is usually the case in, for example, localisation. We would argue that this focus on the process, rather than the product, might relax the rivalry between old and new in remediation procedures. While the products resulting from the process of linking cultures may also be achieved through automatic means (i.e. with cultures becoming more mutually understandable as a result of machine translation), the human-centred intellectual benefits arising from the process of embarking on this journey are, in our view, likely to remain untouched by advances in technology.
In contexts that focus mainly on the translated product, there are (and there will be) trends of expansion (i.e. overlap and interconnection) (Bolter and Grusin 1999) between machine and human translation. In these cases, we expect machine translation to become increasingly ubiquitous, being integrated into different platforms and devices. We already see that machine translation is not restricted to written forms of communication; we expect its application to audio-visual content to flourish on an even larger scale. It is worth noting, however, that convergence tendencies (i.e. integration) (Bolter and Grusin 1999) are also becoming apparent, with different types of technology being interconnected with networks of people on unified platforms, as mentioned by Declercq (2015: 488). This means that processes of both expansion and convergence are expected to surround the remediation between human and machine translation. The ways in which this might come about are addressed in more detail in the following sections, where we describe how we expect this tension between expansion and convergence to shape the horizon of multilingual communication in the near future.
3. The Translator’s Amanuensis 2020
In a broad sense, we envisage the Translator’s Amanuensis 2020 (TA2020) being used at two distinct levels: one serving the general public in their daily translation needs by providing instant machine translation (henceforth referred to as ‘the utility level’), and one serving the different actors involved with translation in professional settings, incorporating features such as 3D visualisation, screen and speech input modes, advanced documentation aids and eye tracking (henceforth referred to as ‘the expert level’).
Previous documents and empirical investigations taken into account in envisaging TA2020’s features include, at the utility level, the mission statement of TAUS (Translation Automation User Society) (n.d.) about the automation of translation and a European Union survey on multilingualism (European Commission 2012) and, at the expert level, surveys conducted by Alonso (2015), Corpas and Roldán (2014) and Durán Muñoz (2010). We discuss TA2020’s two levels in detail below, together with what we predict to be its main ethical issues.
3.1 TA2020 Utility level
TAUS’s (n.d.) mission statement about the automation of translation can be used as a starting point for a discussion on TA2020’s utility level: “We envision translation as a standard feature, a utility, similar to the internet, electricity and water.” Clear from this statement is the ubiquitous aspect of translation as a resource every human being should have access to, a level of accessibility that would be able to “push the evolution of human civilization to a much higher level of understanding, education and discovery.”
Far from being a new realisation, the benefits of multilingualism and of access to content in different languages are also stressed in more specific contexts. A survey conducted by The Economist highlights the importance that global corporations attach to multilingual communication and to the understanding of cultural differences in succeeding at the international level (Economist Intelligence Unit 2012). Similarly, a survey conducted at European level revealed that Europeans consider multilingualism and translation to be important for employability and international mobility (European Commission 2012). In a qualitative study based on interviews with Chinese citizens living in international settings, Zhang (2016) found that communication problems are frequent even among people with a good command of the foreign language, for whom understanding and/or producing statements might prove a challenge because of a poor accent, grammar limitations or a shortage of vocabulary. According to her results, these individuals’ daily use of machine translation is inevitable. Zhang’s conclusions point to a fact that is often taken for granted: communicating in a global world is still a challenging task. All of this reveals that, despite people’s efforts to learn foreign languages, despite government efforts to promote multilingualism, and despite the amount of content that is translated daily by companies, institutions and individuals, a multilingual world remains a brave new world. For this reason, free machine translation systems are popular among global users:
With the rapid uptake of machine translation at a low entry level, but also on mobile phones and on tablets, the perception of translation from the global user’s perspective is changing dramatically (Declercq 2015: 488).
Unsurprisingly, at the utility level TA2020 is a ubiquitous technology; it can be used on any displayer[1] (smartphones, e-glasses, e-paper, screens, e-pads, augmented reality devices, etc.), on computers, at home, at work, anywhere. TA2020 is embedded in displayers; it is a universal technology, much as web browsers are today. TA2020 can be used for private purposes, and users can set up their displayer to show content in a certain target language regardless of the content’s original language. TA2020’s utility level thus gives users universal access to information.
In addition to this universal access to TA2020, displayer providers may offer a paid service to their clients, in which case TA2020 provides fine-tuned translations based on the user’s needs. We expect the provider to be able to track the user’s behaviour and needs; they would have access to information on the user’s job, hobbies and internet use patterns, with TA2020 providing better translations as a result.
3.2 TA2020 Expert level
At the expert level, TA2020 is used as a tool to commission and carry out paid translations in a professional setting. Previous studies reporting translators’ needs and the technical challenges they face provide numerous indications of features that would be desirable for TA2020 at this level. Based on a survey conducted in 2013 among 412 subjects, Alonso (2015) concludes that terminological or lexical needs are among the most frequent issues faced by translators and that finding reliable sources on the use of a term in context is also among their common concerns. These results are consistent with a trend outlined in previous surveys, such as those of Corpas and Roldán (2014) and of Durán Muñoz (2010: 89), who states that:
[…] there is a growing interest in developing translation-oriented tools, either applications to improve searches […] or terminological resources (specialised dictionaries, glossaries, etc.) so as to offer reliable sources of information. However, we observe that there is still a lack of this type of tool and more research should be carried out, above all on the editing phase of terminological projects and the consultation options provided afterwards (Durán Muñoz 2010: 89).
Interestingly, as well as terminological or documental needs, respondents in Alonso’s (2015) study also outlined limitations that remain largely unexplored, reporting that, more than finding out the meaning of a word, they often need to actually visualise the corresponding concept or object (i.e. the signified) by viewing images associated with it (Alonso 2015: 97).
A number of specific features could be envisaged based on the results of these studies. When translators interact with TA2020, they will be presented with a draft translation to build upon. However, they will need further support from TA2020 in order to satisfy their needs. For this purpose, TA2020 incorporates the ability to: a) parse the source content (whether written or audio-visual); b) identify keywords (key concepts), topics and genre; c) mine virtual content (publicly available and private knowledge bases) and social media in order to find relevant and reliable sources of information to be consulted in the translating process (websites, parallel multilingual content, images, augmented reality output[2], videos, news, reports), previous translations and relevant multimodal content. We refer to this feature as relevant knowledge, a function that would show all this information interactively to the translator. This feature can be triggered at the translator’s command or when TA2020 finds evidence of translators’ cognitive effort or of translation difficulties/problems.
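To make the relevant knowledge feature more concrete, the sketch below shows, in a very reduced form, how steps b) and c) could be wired together: candidate key terms are extracted from the source text and turned into queries against knowledge sources. It is a minimal sketch under our own assumptions; the frequency heuristic, the stopword filtering and the search_knowledge_base stub merely stand in for the parsing, term recognition and mining components that a real TA2020 would need.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are",
             "for", "on", "with", "that", "this", "as", "by", "or", "was"}

def extract_key_terms(source_text, top_n=5):
    """Naive keyword extraction based on the frequency of non-stopword
    tokens. A real system would use parsing, term recognition and
    topic modelling instead of this heuristic."""
    tokens = re.findall(r"[^\W\d_]+", source_text.lower())
    content_words = [t for t in tokens if t not in STOPWORDS and len(t) > 2]
    return [term for term, _ in Counter(content_words).most_common(top_n)]

def search_knowledge_base(term, sources=("termbank", "parallel_corpora", "image_bank")):
    """Hypothetical stub: a real implementation would query termbanks,
    corpora, image banks and previous translations for each key term."""
    return {source: f"query:{term}" for source in sources}

def relevant_knowledge(source_text):
    """Assemble an interactive consultation package for the translator."""
    return {term: search_knowledge_base(term) for term in extract_key_terms(source_text)}

sample = ("The patient presented with acute myocardial infarction "
          "and was treated with thrombolytic therapy.")
for term, hits in relevant_knowledge(sample).items():
    print(term, "->", hits)
```

The point of the sketch is the overall shape of the pipeline (extract, query, present) rather than any of its components, each of which would be far more sophisticated in practice.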
For the purpose of tracking cognitive effort, TA2020 uses the displayer’s eye-tracking devices, as well as metrics based on translating time and hesitation. The development of this feature relies to a large extent on cognitive research conducted in the field of post-editing, for example within the CASMACAT and SEECAT projects. Krings (2001: 179) defines cognitive effort in post-editing as ‘the type and extent of cognitive processes’ required for correcting or improving the MT output, a concept that he distinguishes from the mere mechanical operations involved in typing, which he refers to as ‘technical effort’ (ibid.). Previous research suggests that the number of editing operations is not necessarily linked to cognitive effort (Koponen 2012). As per previous work, translators’ cognitive effort becomes evident when they, for example, pause and hesitate many times during the translation process or when they spend longer intervals of time gazing at specific areas of the displayer (see Vieira 2016a for an overview).
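By way of illustration, the sketch below computes two simple behavioural indicators that the post-editing literature associates with cognitive effort: the number of long pauses and the share of working time spent pausing. The log format and the 1000 ms threshold are our own illustrative assumptions; systems such as those developed in the CASMACAT project log far richer keystroke and gaze data.

```python
def pause_indicators(keystroke_times_ms, pause_threshold_ms=1000):
    """Given timestamps (in ms) of successive keystrokes in a passage,
    return the number of long pauses and the share of time spent in them.
    The threshold and the log format are illustrative assumptions only."""
    if len(keystroke_times_ms) < 2:
        return {"long_pauses": 0, "pause_ratio": 0.0}
    gaps = [b - a for a, b in zip(keystroke_times_ms, keystroke_times_ms[1:])]
    long_pauses = [g for g in gaps if g >= pause_threshold_ms]
    total_time = keystroke_times_ms[-1] - keystroke_times_ms[0]
    return {
        "long_pauses": len(long_pauses),
        "pause_ratio": sum(long_pauses) / total_time if total_time else 0.0,
    }

# Example: a burst of typing, one long hesitation, then more typing.
log = [0, 200, 450, 600, 4600, 4800, 5050]
print(pause_indicators(log))  # {'long_pauses': 1, 'pause_ratio': 0.79...}
```

Indicators of this kind, combined with gaze data such as fixation durations on specific areas of the displayer, are what would allow TA2020 to trigger the relevant knowledge function without an explicit command.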
Together with this capability of logging psychophysiological data, we also expect TA2020 to draw on the outcomes of corpus-based research in which existing categorisations of translation difficulties or problems (Nord 1997; Hurtado 2000; Toury 2012) have been operationalised so that they can be identified via parsing and automatic textual analysis procedures. We also predict that TA2020 will automatically flag the repertoire of problems often discussed in existing translation competence models (Kelly 2002; PACTE 2003; Göpferich 2009; Pym 2012a).
It could also be envisaged that this parsing procedure would be applied to price quoting. While automatic MT quality scores that do not require human reference translations are already commercially available (e.g. SDL TrustScore, SDL BeGlobal), this technology is in its early years and the results it produces are not yet reliable enough to be exploited for pricing (see TAUS 2013). We expect source-text analysis and machine translation quality estimation technology to improve considerably in the coming years, providing post-editors with an effort prediction feature that will make it possible to reliably estimate the amount of work ahead based on the translating difficulty of the source text as well as on the quality of the machine translation. This feature is expected to provide more reliable parameters for pricing estimation than those used in current practices, which are largely based on potentially misleading elements such as the volume of the source text or the amount of changes carried out in the machine translation output (i.e. in post-editing – see Vieira [2016b: 189-190]).
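In its simplest conceivable form, the arithmetic behind such an effort prediction feature could combine a source-difficulty score with a quality estimation score, as in the sketch below. The features, weights and linear form are our assumptions, standing in for what would in practice be a trained quality estimation model; the numbers it produces are illustrative only.

```python
def source_difficulty(source_text):
    """Toy difficulty score in [0, 1]: longer sentences and longer
    words are treated as harder. Purely illustrative."""
    words = source_text.split()
    if not words:
        return 0.0
    avg_sentence_len = len(words) / max(source_text.count("."), 1)
    avg_word_len = sum(len(w) for w in words) / len(words)
    return min(1.0, avg_sentence_len / 40 + avg_word_len / 20)

def predicted_pe_effort(source_text, mt_quality_estimate,
                        w_difficulty=0.5, w_quality=0.5):
    """Predicted post-editing effort in [0, 1]. mt_quality_estimate, also
    in [0, 1], would come from a quality estimation system (not implemented
    here); the weights and the linear combination are assumptions."""
    return (w_difficulty * source_difficulty(source_text)
            + w_quality * (1.0 - mt_quality_estimate))

# A per-word price could then scale a base rate by the predicted effort:
base_rate_eur = 0.04
effort = predicted_pe_effort("The device converts pressure into a signal.", 0.8)
print(f"effort={effort:.2f}, suggested rate={base_rate_eur * (1 + effort):.3f} EUR/word")
```

The design point is that the quote is driven by predicted effort rather than by raw source-text volume or by the amount of editing performed after the fact, which is what makes the parameters more transparent for pricing negotiations.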
Traditionally, working with translation memory systems or post-editing tools has implied that the translator deals with chunks of decontextualised text, or segments (Biau-Gil and Pym 2006: 12), preventing translators from gaining a sense of the text as a whole and potentially constraining their creativity (Ehrensberger-Dow 2015). We expect the display of source and target content to be different in TA2020, with segmentation not being apparent to the translator; instead, the translator would usually have in the foreground of the displayer a visualisation of the target content and, in the background (a kind of shadow text behind the scenes), a visualisation of the source content. We call this feature 3D visualisation. These visualisations would be synchronised: if the translator scrolls up or down, forwards or backwards in the source, the target scrolls too, and vice versa. We expect translators to adapt their reading capacity to this new way of reading. This phenomenon is comparable to what happened with the advent of print culture and digital culture, since both paradigms have had an impact on the practices of reading and writing throughout history (Littau 2011).
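Synchronising the two visualisations presupposes a mapping between positions in the target and positions in the source. The sketch below shows a minimal version of such a mapping, assuming the two texts are aligned segment by segment behind the scenes (even though segmentation is never shown to the translator); the proportional interpolation within a segment is our own simplification.

```python
import bisect

def cumulative(lengths):
    """Cumulative character offsets of segment boundaries, starting at 0."""
    offsets, total = [0], 0
    for n in lengths:
        total += n
        offsets.append(total)
    return offsets

def map_offset(offset, from_lengths, to_lengths):
    """Map a character offset in one text to the corresponding offset in
    the other, via aligned segments plus proportional interpolation."""
    src, tgt = cumulative(from_lengths), cumulative(to_lengths)
    i = min(bisect.bisect_right(src, offset) - 1, len(from_lengths) - 1)
    within = (offset - src[i]) / from_lengths[i] if from_lengths[i] else 0.0
    return round(tgt[i] + within * to_lengths[i])

# Aligned segment lengths (in characters) for a source and its translation:
source_segments, target_segments = [120, 80, 150], [135, 70, 160]
# If the translator scrolls to character 150 in the target, the displayer
# brings the corresponding region of the source into view:
print(map_offset(150, target_segments, source_segments))  # -> 137
```

A full implementation would of course map scroll positions rather than raw character offsets and would handle reordering across segments, but the underlying alignment logic would be of this kind.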
A number of additional features inspired by general trends in technology could be envisaged for TA2020’s expert level. For instance, while it is safe to assume that TA2020’s interface and procedures can be highly customised, it also seems plausible that TA2020 would offer a straightforward translation procedure where source content is uploaded to a simpler, abbreviated interface that would be suitable in situations where an urgent machine and/or human translation is required.
Translators would be able to use TA2020 both in screen mode and in speech mode. In screen mode, the user can type, drag or capture their translation, as well as their comments and queries. In speech mode, TA2020 would have a human-like conversational interface; the user would be able to dictate their translation, add comments and make queries. In either mode, there would be no need for the user to deal with formatting issues, as TA2020 would be able to preserve the source content’s format, layout and text flow.
3.3 Ethical considerations
The first ethical aspect to consider with regard to a more widespread use of MT concerns the potential threats that technology is perceived to pose to translators’ professional standing, an issue frequently debated in previous research. In contrast to everyday users, who have rapidly embraced MT, professional translators are still reluctant to incorporate MT systems into their toolbox, according to the survey by Alonso (2015: 99). This is at least what Alonso observed among freelance translators, who prevailed in her sample.
Previous studies in academia often interpret this reluctance as translators’ final attempt to keep control over a process that they consider sacred. Declercq (2015: 489) suggests that translators should overcome their fears towards machine translation and incorporate post-editing into their translating practices. The usual argument is that, for this to happen, traditional perceptions of translation would need to change, with translators understanding that translations have another life after the translation process itself. Pym has expressed a similar idea in his work on translators’ ethics:
As we have seen, the professional reaction to these technologies has still mostly been negative. Horror stories circulate of disasters caused by machine translations; condescending smiles greet claims that anyone can translate; we are still assured that machine and the masses will never penetrate the sanctum of expert knowledge. […]
They [translators] still think they can sell a specialized production process; they oppose the integration of machine translation and volunteers. Increasingly, they will have to realize that what they sell is their seal of approval, their trustworthiness, their responsibility (Pym 2012b: 86).
We can only partially agree with these statements, and the whole issue – complex as it is – probably deserves further consideration. As stated above, translation – human and machine – is currently undergoing a process of remediation ruled by tensions of convergence and expansion. While human and machine are nowadays highly interdependent, the translation profession is increasingly fragmented (Katan 2009): on the one hand, commercial institutions and organisations such as TAUS seem to regard the target text as no more than a product with some monetary value, while, on the other hand, those who study or carry out translation out of choice seem more interested in the process itself, which they fear might be reduced to the click of a button. We doubt that automated forms of translation will ever prevail in situations where the intellectual benefits of this process occupy centre stage. We envisage these two perspectives on translation co-existing harmoniously.
In more practical terms, the rarely addressed issue of copyright and compensation for material that is to be re-used in translation memories or for MT training also merits attention. We would suggest that when translators provide material to be used in training or improving MT systems, or material to be incorporated into clients’ translation memories, they are in effect being asked to donate part of their know-how, and they should be properly compensated for it. The Model Terms of Business proposed by the Institute of Translation and Interpreting (ITI n.d.) in the UK constitute an interesting starting point in this respect, since the terms recommend that the copyright holder of a translation (the translator, in most cases) should charge a fee if agencies/clients are to use the translation in a translation memory. While, as we have mentioned, this would be a desirable practice, we doubt that practitioners actually implement it.
Similarly, issues of ownership and translation permission are likely to arise. In a paradigm of ubiquitous translation, with TA2020 embedded in any displayer, content owners might fear having, for example, their new marketing campaign for trainers ruined by poor, nonsensical MT output made widely available on a number of different devices as a result of a process that is beyond their control. What might happen in a context where MT is provided as a utility is that certain individuals or corporations would want to block their content to prevent displayers from showing non-approved translations. If that happened, it would be interesting to know what technological and legal devices would be designed in this context and how they would be implemented.
In relation to workflow processes between clients and translators, we envisage that clients will be able to share with the language service provider or the translator an encrypted link that enables the translation of their content. As for the potential issues brought about by a fast and wide-ranging diffusion of translated content, clients might prefer to validate the translation before making it publicly available. In most cases, however, we would expect them to want their translation to be available on displayers as soon as possible. Ultimately, translating would basically consist of connecting to the content that needs to be translated and interacting with TA2020 through the displayer.
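As a purely illustrative sketch of how such an encrypted link could work, the snippet below signs a content identifier, a translator identifier and an expiry time with a shared secret, using only the Python standard library. Everything here (the URL, the identifiers, the HMAC-based scheme) is our assumption rather than a description of any existing TA2020 protocol; a production system would add proper key management and transport-level encryption.

```python
import base64
import hashlib
import hmac
import time

SECRET = b"client-and-provider-shared-secret"  # illustrative only

def make_translation_link(content_id, translator_id, valid_for_s=86400):
    """Create a signed, expiring link granting translation access.
    Identifiers are assumed not to contain ':' (a simplification)."""
    expires = int(time.time()) + valid_for_s
    payload = f"{content_id}:{translator_id}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()  # 32 bytes
    token = base64.urlsafe_b64encode(payload + sig).decode()
    return f"https://ta2020.example/translate?token={token}"  # hypothetical URL

def verify_token(token):
    """Check signature and expiry; return (content_id, translator_id) or None."""
    raw = base64.urlsafe_b64decode(token.encode())
    payload, sig = raw[:-32], raw[-32:]  # a SHA-256 digest is 32 bytes long
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return None
    content_id, translator_id, expires = payload.decode().split(":")
    return (content_id, translator_id) if time.time() < int(expires) else None

link = make_translation_link("campaign-2020-03", "translator-42")
token = link.split("token=", 1)[1]
print(verify_token(token))  # ('campaign-2020-03', 'translator-42')
```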
4. Closing remarks
Despite the risks of predicting the features that a new generation of the translator’s amanuensis would offer, exercises of the imagination can be a source of reflection and debate. The review of recent surveys of translators’ needs and of the state of the art in translation technologies provided in this paper makes it clear that post-editing is a form of human-computer interaction that is set to become ever more prominent in the field of translation. Consequently, in the coming years we will continue to witness mutual forms of remediation and tensions of expansion and convergence between human and machine translation.
We have described TA2020 as a ubiquitous translation platform that is embedded in devices we have referred to as displayers. We have not discussed how the technology behind these displayers would be realised (a question beyond the scope of this article), but it seems clear that hardware will be key in these developments. In addition, TA2020 will require new forms of energy storage, distribution and production, which are probably being tested in other fields.
Certain features of TA2020, like the relevant knowledge function, rely on advanced techniques of data mining and processing, sophisticated applications of eye tracking, cognitive effort metrics, and a remarkable capacity to process and analyse corpora. The seeds of these developments already exist. Interestingly, some characteristics of TA2020 would demand psychophysical adaptations in humans, such as an expanded ability to process multimodal information and to adapt to new reading paradigms. There are, however, paths that remain less explored, such as the different ways in which common Translation Studies concepts could be integrated into the field of machine translation: translation competence models, categorisations of translation problems, sociological and ethical issues, and so forth. Hopefully, we will hear about exciting developments in these areas in the near future.
Bibliography
- Alonso, Elisa (2015). “Analysing Translation Professionals in the Information Society and their Use and Perceptions of Wikipedia.” JoSTrans, The Journal of Specialised Translation 23, 89-116. http://www.jostrans.org/issue23/art_alonso.pdf (consulted 15.02.2017)
- Alonso, Elisa and Elisa Calvo (2015). “Developing a Blueprint for a Technology-mediated Approach to Translation Studies.” Meta: Journal des traducteurs / Meta: Translators' Journal 60 (1), 135-157.
- Alonso, Elisa and Elena de la Cova (2014). “Apuntes metodológicos para la aplicación de la socionarrativa a la evaluación de herramientas de traducción: ‘Érase una vez Google Translator Toolkit.’” Revista Tradumàtica 11, 508-523. http://revistes.uab.cat/tradumatica/article/view/57/pdf (consulted 15.02.2017)
- — (2016). “Machine and Human Translators in Collaborative Contexts.” María Azahara Veroz González and María Luisa Rodríguez Muñoz (eds) (2016). Languages and Texts. Translation and Interpreting in Cross-Cultural Environments. Córdoba: UCOPress, 11-23.
- Biau-Gil, José Ramón and Anthony Pym (2006). “Technology and Translation.” Anthony Pym, Alexander Perestrenko and Bram Starink (eds) (2006). Translation Technology and its Teaching. Tarragona: Intercultural Studies Group, Universitat Rovira i Virgili, 5-19.
- Bolter, Jay David and Richard Grusin (1999). Remediation. Understanding New Media. Massachusetts: MIT Press.
- Calvo Encinas, Elisa (2015). “Scaffolding translation skills through situated training approaches: progressive and reflective methods.” The Interpreter and Translator Trainer 9 (3), 306-322. doi: 10.1080/1750399X.2015.1103107.
- Chan, Sin-wai (ed.) (2015). The Routledge Encyclopedia of Translation Technology. London/New York: Routledge.
- Cho, Kyunghyun, Bart van Merriënboer, Dzmitry Bahdanau and Yoshua Bengio (2014). “On the properties of neural machine translation: Encoder-decoder approaches.” Proceedings of the SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Association for Computational Linguistics, 103-111. http://www.aclweb.org/anthology/W14-4012 (consulted 15.02.2017)
- Corpas, Gloria and Marina Roldán (2014). “Análisis de necesidades documentales y terminológicas de médicos y traductores médicos como base para el diseño de un diccionario multilingüe de nueva generación.” MonTI 7, 167-202. http://www.raco.cat/index.php/MonTI/article/viewFile/292105/380612 (consulted 15.02.2017)
- Cronin, Michael (2010). “The Translation Crowd.” Tradumàtica 8, 1-7. http://www.raco.cat/index.php/Tradumatica/article/view/225900 (consulted 15.02.2017)
- Declercq, Christophe (2015). “Editing in Translation Technology.” Sin-wai Chan (ed.) (2015). The Routledge Encyclopedia of Translation Technology. London/New York: Routledge, 480-493.
- Désilets, Alain, Christiane Melançon, Geneviève Patenaude and Louise Brunette (2009). “How Translators Use Tools and Resources to Resolve Translation Problems: An Ethnographic Study.” MT Summit XII - Workshop: Beyond Translation Memories: New Tools for Translators, Ottawa, 29 August 2009. http://www.mt-archive.info/MTS-2009-Desilets-2.pdf (consulted 15.02.2017)
- Dragsted, Barbara (2008). “Segmentation in Translation as a Distributed Cognitive Task.” Itiel E. Dror and Stevan Harnad (eds) (2008). Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam/Philadelphia: John Benjamins, 237-256.
- Durán Muñoz, Isabel (2010). “Meeting translators’ needs: translation-oriented terminological management and applications.” JoSTrans, The Journal of Specialised Translation 18, 77-93. http://www.jostrans.org/issue18/art_duran.pdf (consulted 15.02.2017)
- Economist Intelligence Unit (2012). Competing across borders. How cultural and communication barriers affect business. The Economist. http://www.economistinsights.com/countries-trade-investment/analysis/competing-across-borders/fullreport (consulted 15.02.2017)
- Ehrensberger-Dow, Maureen (2015). “An Ergonomic Perspective of Professional Translation.” Meta: Journal des traducteurs / Meta: Translators' Journal 60 (2), 328. doi: 10.7202/1032879ar
- Enríquez Raído, Vanessa (2013). “Teaching Translation Technologies ‘Everyware’: towards a Self-Discovery and Lifelong Learning Approach.” Tradumàtica 11, 275-285. https://ddd.uab.cat/pub/tradumatica/tradumatica_a2013n11/tradumatica_a2013n11p275.pdf (consulted 15.02.2017)
- European Commission (2012). Europeans and their Languages. Special Eurobarometer 386. http://ec.europa.eu/public_opinion/archives/ebs/ebs_386_en.pdf (consulted 15.02.2017)
- Fulford, Heather and Joaquín Granell-Zafra (2005). “Translation and Technology: a Study of UK Freelance Translators.” JoSTrans, The Journal of Specialised Translation 4, 2–17. http://www.jostrans.org/issue04/art_fulford_zafra.pdf (consulted 15.02.2017)
- García, Ignacio (2010). “The Proper Place of Professionals (and Non-Professionals and Machines) in Web Translation.” Tradumàtica 8, 1–7. http://www.raco.cat/index.php/Tradumatica/article/view/225898/307309 (consulted 15.02.2017)
- — (2015). “Computer-aided translation.” Sin-wai Chan (ed.) (2015). The Routledge Encyclopedia of Translation Technology. London/New York: Routledge, 68-87.
- García, Ignacio and Vivian Stevenson (2009). “Google Translator Toolkit.” Multilingual 106, 16-18.
- Göpferich, Susanne (2009). “Towards a Model of Translation Competence and its Acquisition: The Longitudinal Study TransComp.” Susanne Göpferich, Arnt L. Jakobsen and Inger Mees (eds) (2009). Behind the Mind: Methods, Models and Results in Translation Process Research. Copenhagen: Samfundslitteratur, 11–37.
- Guerberof Arenas, Ana (2013). “What do professional translators think about post-editing?” The Journal of Specialised Translation 19, 75–95. http://www.jostrans.org/issue19/art_guerberof.pdf (consulted 15.02.2017)
- Hurtado, Amparo (2000). Traducción y traductología. Madrid: Cátedra.
- Hutchins, John (1998). “Translation technology and the translator.” Machine Translation Review (British Computer Society) 7. http://hutchinsweb.me.uk/ITI-1997.pdf (consulted 15.02.2017)
- Jenkins, Henry (2008). Convergence Culture: Where Old and New Media Collide. Revised edition. New York: New York University Press.
- Katan, David (2009). “Translation Theory and Professional Practice: A Global Survey of the Great Divide.” Hermes-Journal of Language and Communication Studies 42, 111-154. http://download2.hermes.asb.dk/archive/download/Hermes-42-7-katan_net.pdf (consulted 15.02.2017)
- Kay, Martin (1980). “The proper place of men and machines in language translation.” Palo Alto, CA: Xerox Palo Alto Research Center. [Reprinted in: Machine Translation (1997) 12 (1-2), 3–23]. http://www.mt-archive.info/70/Kay-1980.pdf (consulted 15.02.2017)
- Kelly, Dorothy (2002). “Un modelo de competencia traductora: bases para el diseño curricular.” Puentes 1, 9-20.
- Koponen, Maarit (2012). “Comparing human perceptions of post-editing effort with post-editing operations.” Proceedings of the Seventh Workshop on Statistical Machine Translation. Association for Computational Linguistics, 181-190. http://www.statmt.org/wmt12/pdf/WMT23.pdf (consulted 23.03.2017)
- Krings, Hans P. (2001). Repairing texts: empirical investigations of machine translation post-editing processes. Kent: Kent State University Press.
- Lagoudaki, Elina (2006). “Translation Memories Survey 2006. User’s perceptions around TM use.” Translation and the Computer 28, ASLIB, 1–29. http://mt-archive.info/Aslib-2006-Lagoudaki.pdf (consulted 15.02.2017)
- Lavault-Olléon, Élisabeth (2011). “L’ergonomie, nouveau paradigme pour la traductologie.” ILCEA Traduction et Ergonomie 14. https://ilcea.revues.org/1078 (consulted 15.02.2017)
- LeBlanc, Matthieu (2013). “Translators on Translation Memory (TM): Results of an Ethnographic Study in Three Translation Services and Agencies.” The International Journal for Translation & Interpreting Research 5 (2), 1-13.
- Littau, Karin (2011). “First Steps towards a Media History of Translation.” Translation Studies, 4(3), 261-281.
- Melby, Alan (2006). “MT+TM+QA: The Future is Ours.” Tradumàtica 4, 1-6. http://www.fti.uab.es/tradumatica/revista/num4/articles/04/04.pdf (consulted 15.02.2017)
- Merriam-Webster (2015). “Augmented reality.” http://www.merriam-webster.com/ (consulted 15.02.2017)
- Nord, Christiane (1997). Translation as a purposeful activity. Functionalist approaches explained. Manchester: St Jerome.
- O’Brien, Sharon (2006). “Eye Tracking and Translation Memory Matches.” Perspectives: Studies in Translatology 14 (3), 185–205.
- — (2008). “Processing Fuzzy Matches in Translation Memory Tools: An Eye-Tracking Analysis.” Susanne Göpferich, Arnt L. Jakobsen and Inger Mees (eds) (2008). Looking at Eyes. Eye Tracking Studies of Reading and Translation Processing. Copenhagen: Samfundslitteratur, 79–102.
- — (2013). “The Borrowers: Researching the Cognitive Aspects of Translation.” Target 25 (1), 5–17.
- O'Brien, Sharon, Laura Winther Balling, Michael Carl, Michel Simard and Lucia Specia (eds) (2014). Post-editing of Machine Translation: Processes and Applications. Newcastle: Cambridge Scholars Publishing.
- Oxford English Dictionary (2001). “Medium.” Oxford University Press. http://www.oed.com/view/Entry/115772?redirectedFrom=medium#eid (consulted 15.02.2017)
- PACTE (2003). “Building a Translation Competence Model.” Fabio Alves (ed.) (2003). Triangulating Translation. Amsterdam: John Benjamins, 43-66.
- Pym, Anthony (2010). Exploring Translation Theories. London: Routledge.
- — (2012a). “Translation Skill-sets in a Machine-translation Age.” 16th Symposium on Interpreting and Translation Teaching, Fu Jen Catholic University, Taiwan. http://usuaris.tinet.cat/apym/on-line/training/2012_competence_pym.pdf (consulted 15.02.2017)
- — (2012b). On Translator Ethics: Principles for mediation between cultures. Amsterdam/Philadelphia: John Benjamins Publishing.
- Robinson, Douglas (2003). “Cyborg Translation”. Susan Petrilli (ed.) (2003). Translation Translation. Amsterdam/New York: Rodopi, 369-386.
- Torres Domínguez, Ruth (2012). “2012 Use of Translation Technologies Survey.” Mozgorilla. http://mozgorilla.com/en/texnologii-en-en/translation-technologies-survey-results/ (consulted 15.02.2017)
- Toury, Gideon (2012). Descriptive Translation Studies and beyond: Revised edition. Amsterdam/Philadelphia: John Benjamins Publishing.
- Vieira, Lucas Nunes and Lucia Specia (2011). “A Review of Translation Tools from a Post-Editing Perspective.” Proceedings of the Third Joint EM+/CNGL Workshop Bringing MT to the User: Research Meets Translators (JEC 2011), 33-42. http://rgcl.wlv.ac.uk/papers/NunesSpecia_Jec2011.pdf (consulted 15.02.2017)
- Vieira, Lucas Nunes (2016a). “How do measures of cognitive effort relate to each other? A multivariate analysis of post-editing process data.” Machine Translation 30(1-2), 41-62.
- — (2016b). Cognitive Effort in Post-Editing of Machine Translation: Evidence from Eye Movements, Subjective Ratings, and Think-Aloud Protocols. PhD Thesis. Newcastle University.
- Weiser, Mark (1993). “Ubiquitous computing.” Computer 10, 71-72.
- Zhang, Chuxian (2016). Use and perceptions of Chinese electronic tool Youdao. Master’s dissertation. Universidad Pablo de Olavide.
Websites
- CASMACAT, Cognitive Analysis and Statistical Methods for Advanced Computer Aided Translation. http://www.casmacat.eu/ (consulted 15.02.2017)
- Institute of Translation & Interpreting (ITI). http://www.iti.org.uk and ITI Model Terms of Business (n.d.). http://www.iti.org.uk/about-industry/advice-buyers/155-model-terms-of-business (consulted 15.02.2017)
- Lilt, Interactive, adaptive, translation platform. https://www.lilt.com/ (consulted 15.02.2017)
- SDL BeGlobal. http://www.sdl.com/cxc/language/machine-translation/beglobal/ (consulted 15.02.2017)
- SEECAT project. Speech and Eye-tracking enabling CAT. http://www.cbs.dk/en/research/departments-and-centres/department-of-international-business-communication/events/seecat-project-speech-eye-tracking-enabled-cat (consulted 15.02.2017)
- TAUS. Mission. https://www.taus.net/mission (consulted 15.02.2017)
- — (2013). “Pricing Machine Translation Post-Editing Guidelines.” TAUS, October 7. https://www.taus.net/academy/best-practices/postedit-best-practices/pricing-machine-translation-post-editing-guidelines (consulted 15.02.2017)
Biography
Elisa Alonso is a full-time lecturer and researcher in Translation Studies at the Universidad Pablo de Olavide (Seville, Spain), where she currently teaches at undergraduate and postgraduate levels. She holds a PhD in Communication Studies from the Universidad de Sevilla and a BA in Translating and Interpreting from the Universidad de Granada. Her main research interests include the impacts of technology on sociological aspects of translation and on translator training. Before lecturing, Elisa Alonso worked as a software localiser at Lionbridge and as a freelancer.
E-mail: elialonso@upo.es
Lucas Nunes Vieira is a Lecturer in Translation Studies with Technology at the University of Bristol (UK). He holds a BA in language and linguistics from Universidade Federal Fluminense (Brazil), an Erasmus Mundus Master's degree in natural language processing and human language technology from Universidade do Algarve (Portugal) and Université de Franche-Comté (France), and a PhD on the topic of machine translation post-editing from Newcastle University (UK). His research interests include machine translation, translation editing, psycholinguistics, and corpus-based Translation Studies.
E-mail: l.nunesvieira@bristol.ac.uk
Endnotes
Note 1:
We use the term displayer to refer to any type of device that enables human-computer interaction.
Note 2:
That is, “an enhanced version of reality created by the use of technology to overlay digital information on an image of something being viewed through a device (as a smartphone camera)” (Merriam-Webster Dictionary 2015).