Beyond Translation Memory: Computers and the Professional Translator
Ignacio Garcia, University of Western Sydney
ABSTRACT
Translation has historically been performed by bilinguals equipped with specialised topic knowledge. In the mid-20th century, textual theory and discourse analysis shifted the emphasis to a top-down, whole-text approach that paved the way for modern professional translators as linguistic transfer experts. This professionalisation was further driven by the digital revolution of the 1990s, which caused a huge increase in translation demand and the creation of purpose-designed translation tools—principally translation memory (TM). However, the same technological processes that briefly empowered the professional translator also signalled a return to a bottom-up approach by concentrating on the segment. Twenty years on, translation tools and workflows continue to narrow this focus, even tending towards simple post-editing of machine-translated output. As a result, topic-proficient bilinguals are again entering mainstream translation tasks via simplified translation management processes and crowdsourcing approaches. This article explores these recent trends and predicts that, over the next decade, professional translators will find it increasingly difficult to survive as linguistic transfer experts alone.
KEYWORDS
Translation, localisation, localization, translation memory, machine translation, professional translation, translation as a utility, hive translation.
Introduction
The digital age has affected all professions, but change has been felt by translators more keenly than most. Like the rest of the ‘knowledge sector,’ translators are obliged to work on computer screens and do their research using the web. Unlike their colleagues, however, they have also been propagating this new work environment and fomenting change precisely through their role in translating it. The most significant tool used until now by translators in the digital work environment is Translation Memory software, or TM. By putting the developments of the last twenty years in historical perspective, and with particular attention to events over the last two, this article argues that TM is reaching its use-by date. It also examines the strong re-emergence of Machine Translation (MT) in response to TM's inability to cope with the increasing translation needs of today’s digital age. Furthermore, this paper foresees the closure of the cycle that began when translation became an ‘independent’ profession, and an approaching future in which translation may once again be the realm of the gifted amateur or keen bilingual subject specialist.
Translation as a profession
Translation as a profession is only a recent development. For most of written history, translators were bilinguals with a particular ability or inclination to transfer text between languages, mentored (or not) by more experienced masters. They typically made their living from another primary activity, and applied their knowledge and insights to transferring key texts. Thus, physicians translated medical texts, public servants translated laws or treaties, theologians translated scripture, writers and poets translated literature, and so forth. This model continued unaltered well into the 20th century; it still persists in some sectors and is actually gaining ground in others.
Translation as an ‘independent’ profession only emerged towards the mid-twentieth century, when the old model could not cope and formal training within educational institutions took over from the previous guild-like approach. As the complexity of the translation task became better appreciated and its theoretical foundations were laid, the field was opened to a new professional class. Unlike their historical counterparts, modern translators were linguists trained in the craft of transferring meaning from one language to another, and they acquired specialised topic knowledge as an adjunct to their primary skill as text interpreters and rewriters.
Since the late 1980s the most dynamic sector of the translation profession has been that linked to translating digital content—translating for the screen, not for the printer; translating for localisation, not for publishing. Localisation, in its classic late nineties definition, means the linguistic and cultural adaptation of a product or service into another language or locale. It has translation at its core, but equally involves associated engineering and managerial tasks.
From the nineties onward, this shift went hand-in-hand with increasing demand, as the Information Technology industry realised that the task of translating user interfaces, user assistance, web pages, video games, etc., far exceeded the capacities of its bilingual staff. This was the age of the Language Service Providers (LSPs), which employed translation technology, the internet, and pools of professional translators and revisers to process large jobs efficiently and competently. Without professional intervention, the industry would have choked in the linguistic mess that the amateurs had been creating. Preparing candidates for this profession is what authors such as Gouadec (2007), McKay (2006) or Samuelsson-Brown (2004) and a large number of university courses are all about.
TM Beginnings
The Information Revolution did not just generate more work for translators, but also new tools aimed at boosting their productivity. One particular tool soon achieved prominence—and it was not machine translation (MT), as many pundits had been predicting since the 1950s. While computer sophistication and language algorithms were not yet enough for useful MT, the humble PC had abundant processing power and memory aplenty for a low-tech offshoot of MT: translation memory (TM).
Essentially a database application, TM allowed past translations to be recycled, affording increased productivity and consistency, while its filters could handle digital file formats that word processors could not. TM became the interface between LSPs and freelance translators, allowing them to collaborate in large-scale translation projects.
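The recycling mechanism itself can be pictured quite simply: every confirmed translation is stored as a source/target segment pair, and each new segment is compared against the store, returning exact or ‘fuzzy’ matches above a similarity threshold. The Python sketch below is purely illustrative—a minimal model of the idea, not the internals of any commercial TM product—and the example segments and threshold are invented.

```python
# A toy translation memory: store source/target segment pairs and look up
# new segments by string similarity. Real TM suites add tag handling,
# terminology, file filters and much more.
from difflib import SequenceMatcher

class TranslationMemory:
    def __init__(self):
        self.units = []  # list of (source, target) pairs

    def add(self, source, target):
        self.units.append((source, target))

    def lookup(self, segment, threshold=0.75):
        """Return the best (score, source, target) match above threshold, or None."""
        best = None
        for src, tgt in self.units:
            score = SequenceMatcher(None, segment.lower(), src.lower()).ratio()
            if score >= threshold and (best is None or score > best[0]):
                best = (score, src, tgt)
        return best

tm = TranslationMemory()
tm.add("Click the Save button.", "Haga clic en el botón Guardar.")
tm.add("Click the Cancel button.", "Haga clic en el botón Cancelar.")

print(tm.lookup("Click the Save button."))   # exact repetition, score 1.0
print(tm.lookup("Click the Print button."))  # 'fuzzy' match offered for editing
```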
TM was useful for most kinds of translation tasks, but came into its own with localisation. Ownership of and proficiency with an industry-compatible TM software suite soon became indispensable for aspirants to this kind of work. In fact, during its early phase, the main impetus in developing the technology came mostly from keen freelancers (Jochen Hummel and Iko Knyphausen of Trados, Emilio Benito of Déjà Vu), although emerging localisation agencies (Star-Transit) or big corporations (IBM Translation Manager) also played a role.
Over time however, what had commenced as a translator’s tool became something that language vendors imposed on their pool of freelancers, and finally—once major translation buyers became aware of the benefits—was in turn imposed on language vendors by corporations.
Over the course of the nineties, TM technology matured: applications became more stable and more powerful. They incorporated terminology management systems, alignment and terminology extraction tools, then quality control (QC) and project management features, and eventually the capacity to handle multiple files and formats in batches and to use several memories and glossaries simultaneously. The evolution of this technology can be traced best by following reviews of individual products in industry journals (for example Benis 2003; Wassmer 2003).
Focus on segments
The role of the technical translator changed as a direct result of TM technology. Translators were no longer focused on translating texts, but segments, which were often displayed in the editing window in non-sequential fashion. When matches were found, the translator would have the option to accept them after checking and/or editing (but not translating). Rejected or empty segments would of course require old-fashioned translating from scratch, but always within the narrow context of the TM editor, rather than a ‘whole-text’ approach (Hennessy 2008).
This was a radical departure from the canonical translator’s role, and one largely ignored by translation research, training institutions and professional bodies. Nevertheless, users soon became accustomed to it, finding that after a period of adaptation they could achieve higher productivity.
For a brief honeymoon period, translators who embraced the new technology enjoyed the benefits of significant time savings and almost exclusive access to high-tech jobs. Moreover, the translation solutions they generated stayed on their hard drives, and over time increased in value as linguistic resources. This heyday involved what has been termed the ‘interactive mode.’
However, freelancers soon lost control of this technology to the emerging translation bureaus that would become today's LSPs. Now, translators were no longer accessing their own resident translation memories at will, but rather dealing with a ‘pre-translated’ file emailed or downloaded from an LSP. Under this mode, freelancers would receive a bilingual file with matches both exact and fuzzy, plus terms from the existing databases, already inserted in the target section. This ‘pre-translation’ mode allowed LSPs to share only the minimum information required, thus centralising resources and preventing collaborators from sharing them with other competing LSPs or clients. With little effort, LSPs could now multiply individual productivity gains by leveraging the memories and glossaries generated by hundreds of (mostly freelance) translators.
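Schematically, this pre-translation pass is a batch run of the source segments against the LSP's databases: anything at or above the fuzzy threshold is inserted into the target column, banded by match percentage, and everything else is left empty for the translator. The sketch below is a hypothetical illustration under those assumptions; the memory contents, threshold and band labels are placeholders rather than those of any particular tool.

```python
# Illustrative pre-translation pass: the LSP runs the source segments against
# its central memory and sends out a bilingual file with matches pre-inserted.
from difflib import SequenceMatcher

MEMORY = {  # invented stand-in for the LSP's central memory
    "Click the Save button.": "Haga clic en el botón Guardar.",
    "The file could not be opened.": "No se pudo abrir el archivo.",
}

def pretranslate(source_segments, threshold=0.75):
    """Return (source, inserted target, match band) rows for a bilingual file."""
    rows = []
    for seg in source_segments:
        best_score, best_target = 0.0, ""
        for src, tgt in MEMORY.items():
            score = SequenceMatcher(None, seg, src).ratio()
            if score > best_score:
                best_score, best_target = score, tgt
        if best_score == 1.0:
            band = "exact match"
        elif best_score >= threshold:
            band = f"fuzzy match ({best_score:.0%})"
        else:
            band, best_target = "no match", ""  # left empty for the translator
        rows.append((seg, best_target, band))
    return rows

for row in pretranslate(["Click the Save button.",
                         "Click the Print button.",
                         "Restart the application."]):
    print(row)
```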
Wallis (2006) has studied how translators respond to these two ways of engaging with TM, and found that although there was not much difference productivity-wise, translators tended to prefer and to work more comfortably in the original interactive mode. But user-friendliness was not the critical issue. What galled freelancers was the fact that external databases contained segment matches they had not generated themselves. This meant more time checking and editing, yet entailed mandatory price reductions—the infamous ‘Trados discounts,’ so-called because of the pre-eminence of Trados among the big players. A search for ‘Trados discounts’ in the archives of any mainstream translator’s list (TM oriented or not) will reveal some very interesting threads.
By the turn of the century, it was the translation departments of big corporations that would take the technological initiative from LSPs. Now, corporate clients would retain their own memory and glossary repositories, and commission translations from possibly several competing LSPs—pushing prices down and obliging LSPs to conform to the chosen TM application or format.
From hard drive to server
As computer power and broadband connectivity increased, moving the databases from hard drives to servers became feasible. Over the last few years, both language buyers and language vendors have keenly joined in. The pre-translation mode is rapidly being phased out in favour of the emerging web-interactive mode, whereby translators now log in to the databases via their browser. Although access is still only one segment at a time, there is now the possibility of leveraging segments in real time, as and when they are created by other translators working remotely on the same project.
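Architecturally, the web-interactive mode replaces local database access with a per-segment round trip to the vendor's server: open a segment, fetch matches, confirm, upload, repeat. The simulation below is hypothetical—the functions and delay figures are invented stand-ins, not any vendor's actual interface—but it illustrates why the per-segment network latency raised as an issue below accumulates over a whole job.

```python
# Simulated per-segment round trips in a web-interactive TM workflow.
import time

NETWORK_DELAY = 0.05  # seconds per request; an invented, simulated figure

def server_lookup(segment):
    """Stand-in for querying the shared server-side memory for matches."""
    time.sleep(NETWORK_DELAY)
    return None  # pretend nothing useful is stored yet

def server_commit(segment, translation):
    """Stand-in for uploading the confirmed segment back to the server."""
    time.sleep(NETWORK_DELAY)

def translate_job(segments):
    start = time.time()
    for seg in segments:
        match = server_lookup(seg)                  # round trip 1: fetch matches
        translation = match or f"[translation of: {seg}]"
        server_commit(seg, translation)             # round trip 2: commit segment
    return time.time() - start

waiting = translate_job([f"Segment {i}" for i in range(20)])
print(f"20 segments: ~{waiting:.1f}s spent waiting on the network alone")
```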
No empirical studies have yet been done on how this emerging web-interactive mode suits the translators who are shoehorned into it. Judging by the technology alone, and as confirmed by some anecdotal evidence, it appears to disadvantage them in at least four ways. Firstly, it imposes the tool they must use: whereas the pre-translation mode allowed a translator to work with, say, a Trados-compatible tool rather than Trados itself, with web-based technology this is no longer feasible. Secondly, it slows the respective response times for opening each segment, searching the database(s) and returning results, and closing (uploading) completed segments (tool developers might dispute this point, but comments from translators suggest otherwise). Thirdly, it makes it difficult for translators to build up their own linguistic assets, although in some cases, with extra effort, they might circumvent this. Lastly, it clearly gives LSPs access to performance-related information that most self-employed professionals would like to keep confidential: hours spent, translation speed, work patterns.
The web-interactive mode is therefore not serving freelance translators well—on the contrary, it seems to have worsened their working conditions (Garcia 2007). With payment on a per-word basis (for the most common languages at least), it contributes to continual downward pressure on rates, as the expansion of internet services affords access to translators from countries with lower costs of living (see Chan 2008 for an interesting view of the overlap between localisation and the economy).
LSPs meanwhile have gained much broader control over the translation process. The sector has witnessed various mergers and acquisitions, developed complex systems to automate translation processes which previously relied on phone calls and email, and off-shored many engineering tasks to developing countries. And yet LSPs are struggling now too, while a look at globalisation journals (ClientSide News for example) indicates that language buyers are also unhappy with the current status quo—despite the chips having seemingly fallen all their own way.
The principal stumbling block is that, notwithstanding all the undeniable productivity gains from improvements in TM technology, translation remains a ‘manual’ activity. At this stage of web development, translation needs are growing exponentially with the emerging ‘Web 2.0’ community, and even with state-of-the-art technology and processes the present paradigm is inadequate. This is because key tasks must still be performed by capable humans, who are slow and expensive in comparison to machines.
Translating for the web
Whether we choose to use the Web 2.0 tag (O'Reilly 2005) or not, the cyber-scape of today is vastly different to that of the nineties. Software developers are moving data, computing tools, and even software development itself from the hard drive to servers in vast data centres—or the ‘cloud,’ if you prefer the latest vogue metaphor (Haynie 2008). Instead of residing on the user’s own hard drive, applications are now increasingly accessed through a web browser, a trend known as SaaS or “Software as a Service” (SIIA 2001).
As for the way people use the web, that has also dramatically changed. It is not just the producer-centric venue of a decade ago, through which corporations and institutions could market goods and services to potential customers via hypertext files on ‘static’ pages. Now it also has a user-centric layer where we can connect with real and virtual ‘friends’ to exchange ideas and opinions, or pursue common causes and interests. Computer operation was once a career, but nowadays practically anyone can book travel and accommodation, communicate instantly with text, voice, or video, download or upload text, audio and video from or to websites, buy and sell, join groups, operate banking accounts—and the list is growing.
Meanwhile, concerns at not dealing with a physical ‘shopfront’ are vanishing, as consumers discover they can access quality services cheaply, sometimes even for free. Consequently, in just a few years, we have imperceptibly grown accustomed to transacting our business and even social lives through mouse clicks. The technology is inexpensive and transparent, and has opened up a brand new world full of possibilities… as long as we speak English, or one of the major languages.
Effectively, the web has drastically lowered the space/time barrier. The accessibility barrier (the cost of the hardware) also keeps falling. The language barrier, however, remains. As remarked earlier, the amount of content contributed by producers and users far exceeds the translation industry’s capacity to cope. Localisation is geared to producing quality output, but is relatively slow and only affordable to big players on big projects. It simply cannot keep pace with an environment that puts a premium on cheapness and speed.
For some twenty years, the industry made impressive progress on the back of TM. But just as the master/apprentice model collapsed under the weight of the mid 20th century scientific-technical revolution, the localisation model that subsequently emerged is failing now itself in the face of web-driven demand.
When the hard-drive was at the core of the computer revolution, the localisation industry had power and purpose. Now, with the personal computer becoming little more than a browser terminal, TM technologies and current localisation processes are not enough. Recent developments point to some trends that may play a big role in the translation of digital content as we enter into the next decade. For now, we can predict that TM will still have a role over the next decade, but mostly in support of new generation MT.
Free, unassisted MT
To become fully connected, planet web has a language barrier to break through — and on past performance, if the localisation industry is unable to help, it will do it on its own. Trying, as Yoda might say, it is.
The first attempt, so far, has been machine translation (MT), in the shape of web-based, fully automated MT such as that offered since the late nineties by Babel Fish, and more recently by Google Translate or Microsoft Windows Live. MT embodies the trinity of our brave new web world: free, instantaneous, and easy to use. In the latest versions, you can set your browser for Google Translate to produce a page in your language (if yours is among the 13 languages / 29 language pairs now supported by its MT engine) at the click of a button. Similarly, if you are consulting an article in the Microsoft Knowledge Base that has not been translated, the page offers you a machine-translated version and asks for feedback. Laughable? You might be surprised. According to a study by Wendt (2008, cited in ClientSide News by Dillinger & Gerber 2009), Microsoft found little difference in usefulness ratings between the English source and the MT versions in Spanish, Portuguese and Arabic, with some articles machine translated into Chinese gaining a higher rating than the original. A study by Intel, also reported in ClientSide News (Gerber 2008), examines customer satisfaction with its own knowledge-base performance. The figures show 53% positive responses for the original English, with French, German, Italian and Turkish ‘human’ translations ranging between 34% and 40%. By comparison, raw Spanish MT output scored 43%. If these studies are reliable, they certainly call the current expenditure on translation QA into question.
It goes without saying that MT quality can be a somewhat elastic concept within certain limits, and depends on several variables: source processing, engine preparation, engine type (rule-based, statistical, or some kind of hybrid), language pair combination (Wilks 2005). Only in the most restrictive environments is it likely to produce output that is ‘publishable’ under old notions of quality, but with the speed we now desire, assessment is tempered by fitness for use: if users are satisfied with results, anything more is a waste of resources.
According to a recent TAUS (2009) report, automatic translate buttons in search portals get more than 50 million hits a day. The free MT model advanced by Google and other big commercial applications will certainly be maintained, since its market value does not depend on the selling of licences, but on advertising revenue from searching eyeballs (DePalma 2007). Its quality can only improve in the coming years, given already heavy research funding by the US Department of Defense (Bemish 2008), advances in retrieving useful bilingual text from the internet (see, for example, the Cross-language Information Retrieval model in Lu 2007), and the increasing amount of clean, TM-generated bilingual text that can be used to train its engines.
Bilingual seed data will of course keep growing in quantity and quality, driven by initiatives such as the Translation Automation User Society (TAUS) and its Data Center, to which significant language buyers (Adobe, Microsoft, Oracle, Sun Microsystems) and vendors (Lionbridge, Jonckers, SDL, Welocalize) have already pledged to contribute. All this will further propel the inexorable march of linguistic assets from hard drive to enterprise server to industry-shared repositories.
While unassisted, free MT can be useful for gisting purposes, the general consensus is that it is not yet up to the standard required for dissemination. This situation can, however, be greatly improved when MT is properly assisted.
Beyond TM: MT-assisted TM
For now and the foreseeable future, stand-alone, unassisted MT is not yet the solution. However, the big players in localisation are already taking assisted MT very seriously indeed.
Back in March 2008, when Google had just launched its new SMT engine, Common Sense Advisory was proposing that LSPs should pre-process texts using Google Translate, and then decide whether to post-edit or discard and translate from scratch (DePalma 2008). So far no LSP has admitted to trying this, but there is no cause for embarrassment since this basic strategy is already attracting significant interest.
Using careful controlled authoring (now of course enhanced by authoring tools), customised MT engines with the most up-to-date glossaries and memories, and human post-editing, adherents believe MT is now reaching a stage where it can produce TM-quality output faster than TM itself for many types of texts. Taking the best of both worlds, they propose a workflow whereby the source text is first pre-translated with TM, and the remaining empty segments are then processed by MT. The human translator who would once have translated the incomplete segments now post-edits the final result.
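In outline, the proposed workflow is a simple cascade: leverage the memory first, send segments without a usable match to the MT engine, and hand the assembled file to a human post-editor. The sketch below illustrates that cascade only; the memory contents and the machine_translate stand-in are invented, the latter representing a call to a customised MT engine rather than any particular product.

```python
# MT-assisted TM cascade: TM match where possible, MT fallback otherwise,
# with the whole file then routed to human post-editing.
from difflib import SequenceMatcher

MEMORY = {  # invented example pair standing in for the project's memories
    "Click the Save button.": "Haga clic en el botón Guardar.",
}

def tm_match(segment, threshold=0.75):
    """Best memory match above the fuzzy threshold, or (None, 0.0)."""
    best_score, best_target = 0.0, None
    for src, tgt in MEMORY.items():
        score = SequenceMatcher(None, segment, src).ratio()
        if score > best_score:
            best_score, best_target = score, tgt
    return (best_target, best_score) if best_score >= threshold else (None, 0.0)

def machine_translate(segment):
    return f"<MT output for: {segment}>"  # placeholder for a customised MT engine

def pretranslate(segments):
    """TM where possible, MT for the rest; output goes to the post-editor."""
    for seg in segments:
        target, score = tm_match(seg)
        origin = f"TM ({score:.0%})" if target else "MT"
        yield seg, target or machine_translate(seg), origin

for row in pretranslate(["Click the Save button.",
                         "Select a printer from the list."]):
    print(row)
```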
Big language buyers (including Microsoft) tend to think the tipping point at which MT output will help rather than distract translators has been reached, and have commenced implementing such TM-assisted MT systems. There are still no empirical studies to measure its success, but this is the standard course of events nowadays in most fields: first the technology is developed, then applied, and only then will there be studies to inquire if that application made sense. That said, there is some interesting preliminary work in TM/MT that examines how translators deal with fuzzy matches versus machine-translated segments (Guerberof 2008; O'Brien 2006).
As we move into the next decade, what has begun as a pilot by a few heavyweights could well become mainstream for the whole of the localisation industry. A recent study commissioned by SDL found that 40 percent of the clients surveyed were likely to use MT ‘now’ for either technical documentation or support and knowledge-based content (SDLResearch 2008).
At the freelance level, MT output for translators using TM was hitherto deemed to be more distracting than helpful. At the beginning of the decade, some tools already came with MT plug-ins (Wordfast version 3, SDLX version 4, Trados version 5), but the concept did not find favour and was consequently neglected in subsequent versions. This is changing now. SDL Trados 2007 offered access to its in-house SDL Automated Translation feature in late 2008 (SDL 2008), and the feature remains, now with easier access, in the new SDL Trados Studio 2009. MultiTrans signed an agreement to offer access to the Systran MT engine early in 2009 (Multicorpora 2009).
From outside the traditional localisation industry, Google announced in June 2009 its Translator Toolkit, a web-based TM tool that went a step further in this direction by filling in target segments with Google Translate output by default—leaving translation from scratch, rather than post-editing, as merely an option. It started as a beta release with only English as a source language and no fuzzy matching, and was thus not suitable for professional translation. Seen as a ploy by Google to engage unpaid translators in providing parallel texts to feed its SMT engine (Zetzsche 2009; van der Meer 2009), the Google Translator Toolkit nevertheless illustrates how fast MT is gaining ground.
Soon, if not already, professional translators in the localisation industry will no longer translate texts (like their literary counterparts) or segments (as in the TM heyday), but just post-edit machine output. Recent research comparing translation by post-editing MT with translation directly from the source text found the post-edited output to be of higher clarity and accuracy, if not of better style (Fiederer & O’Brien 2009). The new key figures will be the technical writers who produce the consistent, controlled source text that ensures accurate TM results, and the database managers and reviewers who tend the TM corpus.
The new model will continue improving its cost effectiveness by reducing demands on those doing the post-editing, and this will almost certainly see the deputising of competent bilinguals in place of professional translators. The present cycle will then presumably close, as translation loses its professional status and returns to its millennial amateur paradigm.
Beyond TM: Translation as a ‘utility’
The introduction of MT-assisted TM may help extract more productivity out of the traditional localisation mode. After all, TM has been its foundation, and this is the next step—albeit a radical one—in the path the technology has followed since its inception. While MT-assisted TM will contribute to incrementally advancing productivity, it will only alleviate the problem rather than offer the necessary solution.
The advent of SaaS has provided cost savings and enhanced tracking systems that can manage translation with minimal human intervention, and LSPs are already making full use of it. But interestingly, this technology also offers the possibility of bypassing LSPs altogether. This concept has been under test for some time, for instance through portals such as ProZ, Translators' Cafe, Aquarius and others.
Yet web users typically expect more: they want translation to be as close to instantaneous and free as possible, and if the localisation industry can’t or won’t deliver, sheer demand almost guarantees someone else will… Livetranslation.com, for example, offers fast, small-volume, user-friendly human translation on-demand. Here the client posts a source text to the site and, with payment arranged, a translator-on-duty performs the translation and uploads it in the time taken to type it. It seems ideal for natural language applications beyond the present capabilities of unassisted MT. It may sound trivial, but Microsoft is taking it seriously enough, with plans to configure its Knowledge Base so that if users are unhappy with results from its unassisted MT engine, they can access this premium ‘human’ service instead (Livetranslation 2008).
Indeed, this mode seems tailor-made for customer support / knowledge-based content, where the translator can be assisted with the corporation’s latest available terminology and memories, as well as for natural language applications like email/instant messaging, which unassisted human translation could handle well. At least for the languages and areas of greater demand, one can easily imagine future translators being paid by the hour rather than by the word, and working on-site under call-centre conditions rather than freelancing from home.
TAUS has already named this trend ‘translation as a utility,’ but it might also be called ‘translation-on-tap,’ or ‘off-the-wall’ by analogy with public utilities such as water or electricity.
Beyond TM: ‘Hive’ translation
Late in 2007, the social networking site Facebook asked its bilingual users to translate its site for free, and succeeded. The Spanish, French and German sites became available by January 2008, and many others since. It required planning, certainly, and a sophisticated technical platform, but it was translation by amateurs—albeit bilinguals with a privileged knowledge of the subject matter. Facebook’s experiment came at a critical moment, when competing site MySpace was translating its own site following standard localisation processes, and Skyrock, Hi5 and others were also seeking to grow outside the English market.
What Facebook actually did was crowdsource its translation and, on the strength of the results, the task would probably not have been performed any faster or better had it followed the usual localisation industry processes. Crowdsourcing itself was around well before Wired popularised the term (Howe 2006) and had already been used in several other industries (see Kleemann 2008). Big players have also crowdsourced particular pockets of content they thought could be suitable. Google in fact has relied, and keeps relying, on crowdsourcing to translate its interface into many ‘minority’ languages—not to mention the ‘Suggest a better translation’ feature through which Google Translate requests crowd contributions towards its SMT engine.
What is significant is that Facebook and all these other sites are commercial concerns that have appropriated an altruistic concept that originated in the free/open source software (FOSS) sector. Here, since there were no funds to localise, translation by volunteers was the only means. Indeed, much early research into computer assisted collaboration on linguistic tasks, even involving MT in its processes, was initiated within these FOSS areas (Murata 2003; Shimohata 2001).
Businesses are now looking at this development with great interest, as can be inferred from the attention it has generated in the professional press and among consulting firms (Multilingual, ClientSide News, Common Sense Advisory, Byte Level Research, The Gilbane Report and others). TAUS calls this ‘community’ translation, and Common Sense Advisory dubs it CT3 (community translation + collaborative technology + crowdsourcing). We propose the term ‘hive’ translation, since the unbounded nature of cyberspace associations clearly transcends old notions of ‘community’.
Translators have not reacted much as yet, other than with the occasional complaint about amateur translation that writes the Spanish hacer as aser. However, it is precisely these obvious errors that will be quickly seized upon and corrected by other ‘hive’ members. Professional translators will have to deal seriously with collaboration in the next decade.
Even within the traditional localisation framework, in 2007 Common Sense Advisory was already proposing the need to replace the traditional translate-edit-proofread (TEP) model of the print/Taylorist era with a ‘collaborative translation’ model better suited to our instant communication era. Translations would thus be undertaken in parallel rather than consecutively, by as many translators and subject matter experts as possible, while doing away with separate editing and proofreading roles—the idea being to avoid mistakes from the outset rather than detect them at the end (Beninatto & DePalma 2008).
The typical role of the professional translator is further challenged in new scenarios currently under test. These include the use of wikis, as in the Cross-Lingual Wiki Engine (CLWE) introduced by Huberdeau, Paquet and Desilets (2008), which allows content to be authored and translated without relying on a single master language or on professionally trained translators, in environments no longer bound by tight coordination.
Professional translators post-2010
Translation as performed by the localisation industry is expensive and time consuming. The industry itself is being sidelined by technological advancement, and is proving slow to react. Change, as noted by Bower and Christensen (1995), can be incremental (from the inside) or disruptive (through the intervention of external forces), and Joscelyne and van der Meer (2007) have already given examples of how these two forces are shaping localisation into the next decade. The same forces are also recasting the role of professional translators.
Most translation done for localisation is likely to follow the MT-assisted TM model, with the translator thus becoming a de-facto post-editor. Some professional translators will still be needed to fill this role, and are still likely to work within the traditional TEP model under their current freelance status.
The ‘utility’ model could well cater for small projects, or projects in specialised areas. It would also employ professional translators using MT-assisted TM for texts written in some kind of managed authoring environment, or translating directly when dealing with the colloquial language of email and instant messaging. In a typical situation, the use of on-site resources will entail professional translators working in low-paid, call-centre conditions.
The ‘hive’ model does away with professional translators altogether in favour of a mass of volunteers and amateurs. It brings back the pre-professional era when translators were simply bilinguals with good subject knowledge and the ability or inclination to transfer meaning between languages. It would be supported by a few professionally trained translators occupying key terminological or QC roles in the background.
One can easily imagine both ‘utility’ and ‘hive’ approaches merging, with volunteer bilinguals helping their fellow virtual ‘friends’ with the same goodwill as we might give directions to a foreign tourist in the street. Google’s forthcoming Wave offers the possibility of inviting a robot, Rosy (as in Rosetta Stone), to the conversation, and Rosy will machine translate words as they are being typed. For web interactions too complex for Rosy to handle, the requester could send a message that a volunteer could translate, with or without MT-assisted TM or payment. If Google is also the agent or provider of this additional service, it can clearly capture the data to improve Rosy’s next performance.
What place would the bulk of today’s professional translators occupy? This paper argues that, as early as 2010, translation for localisation will be pushed into simple MT post-editing, while other sectors will see a shift toward call-centre conditions and a return of the amateur.
As the internet becomes a true utility, translation is not the only profession to experience the stress of the digital age. Translators will still be needed, but their working conditions in the next decade will be quite dissimilar to those of the nineties.
References
- Bemish, Nicholas (2008). “Can MT really help the Department of Defense?” Paper presented at the The Eighth Conference of the Association for Machine Translation in the Americas (Waikiki, Hawaii, October 21-25).
- Beninatto, Renato S. and Donald A. DePalma (2008). “Collaborative translation.” Multilingual Resource Directory Editorial Index 2007, 49-51.
- Benis, Michael (2003). “Much more than memories.” ITI Bulletin, November-December, 24-29.
- Bower, Joseph L. and Clayton M. Christensen (1995). “Disruptive Technologies: Catching the Wave.” Harvard Business Review, 73(1), 43-53.
- Chan, Lung J. (2008). Information Economics, the Translation Profession and Translator Certification. PhD thesis. Universitat Rovira i Virgili, Tarragona (Spain).
- DePalma, Donald A. (2007). “Machine Translation Attracts Eyeballs, Not Software Revenue.” Global Watchtower, December 20.
- — (2008). “Google MT Puts Multilingual Information at More Fingertips.” Global Watchtower, March 25.
- Dillinger, Mike and Laurie Gerber (2009). “Success with Machine Translation: Automating Knowledge-base Translation, part 1.” ClientSide News, January, 10-11.
- Fiederer, Rebecca and Sharon O’Brien (2009). “Quality and Machine Translation: A realistic objective?” The Journal of Specialised Translation, 11, 52-74.
- Garcia, Ignacio (2007). “Power shifts in web-based translation memory.” Machine Translation, 21(1), 55-68.
- Gerber, Laurie (2008). “Recipes for Success with Machine Translation: Ingredients for Productive and Stable MT deployments.” ClientSide News, November, 15-17.
- GoogleBlogoscoped (2008). “Google Translation Center, a New Human Translations Service in the Making.” On line at: http://blogoscoped.com/archive/2008-08-04-n48.html (consulted 01.08.2009)
- Gouadec, Daniel (2007). Translation as a Profession. Amsterdam/Philadelphia: John Benjamins.
- Guerberof, Ana (2008). “Post-editing MT and TM. A Spanish case.” Multilingual, 19(6), 45-50.
- Haynie, Mark (2008). “Enterprise Cloud Services: Deriving Business Value from Cloud Computing. White paper.” MicroFocus.
- Hennessy, Eileen B. (2008). “Navigating in a New Era: What Kind of Education and Training for Translators?” Translation Journal, 12(4).
- Howe, Jeff. (2006). “The Rise of Crowdsourcing.” Wired, June.
- Huberdeau, Louis-Philippe; Sebastien Paquet and Alain Desilets (2008). “The Cross-Lingual Wiki Engine: Enabling Collaboration Across Language Barriers.” Paper presented at WikiSym 2008, the International Symposium on Wikis, Porto (Portugal).
- Joscelyne, Andrew and Jaap van der Meer (2007). “Translation 2.0: Market Forces.” Multilingual, 18(1), 26-27.
- Kleemann, Frank; G. Günter Voß and Kerstin Rieder (2008). “Un(der)paid Innovators: The Commercial Utilization of Consumer Work through Crowdsourcing.” Science, Technology and Innovation Studies, 4(1), 5-26.
- Livetranslation (2008). “Microsoft Partners with Live Translation to Provide Best of Both Worlds Translation Service.” On line at: http://livetranslation.com/PDFs/Live%20Translation_Microsoft_May08.pdf (consulted 01.08.2009)
- Lu, Chengye; Yue Xu and Shlomo Geva (2007). “Improving translation accuracy in web-based translation extraction.” Proceedings of NTCIR-6 Workshop Meeting, Tokyo, Japan.
- McKay, Corinne (2006). How to Succeed as a Freelance Translator. Morrisville, NC: Lulu Enterprises.
- Multicorpora (2009). “Systran and MultiCorpora integrate technologies for increased translation quality and volume.” On line at: http://www.multicorpora.com/about/news/about_news_1971_en/ (consulted 01.08.2009)
- Murata, Toshiki; Mihoko Kitamura; Tsuyoshi Fukui and Tatsuya Sukehiro (2003). “Implementation of collaborative translation environment 'Yakushite Net'.” MT Summit IX, 479-482.
- O'Brien, Sharon (2006). “Eye-Tracking and Translation Memory Matches.” Perspectives: Studies in Translatology, 14(3), 185-204.
- O'Reilly, Tim (2005). “What Is Web 2.0. Design Patterns and Business Models for the Next Generation of Software.” On line at: http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html?page=1 (consulted 01.08.2009)
- Samuelsson-Brown, Geoffrey (2004). A Practical Guide for Translators. Clevedon, Philadelphia, Adelaide: Multilingual Matters.
- SDL (2008). “SDL announces substantial increase in translation productivity with the new SDL Trados 2007 Suite.” On line at: http://www.sdl.com/en/events/news-PR/2008/[EDIT] (consulted 01.08.2009)
- SDLResearch (2008). “Trends in automated translation in today’s global business. White Paper.” On line at: http://www.sdl.com/en/globalization-knowledge-centre/whitepapers/ (consulted 01.08.2009)
- Shimohata, Sayori; Mihoko Kitamura; Tatsuya Sukehiro and Toshiki Murata (2001). “Collaborative translation environment on the web.” Paper presented at MT Summit VIII, Santiago de Compostela, Spain, September 18-22.
- SIIA (2001). “Software as a Service: Strategic Backgrounder.” Software & Information Industry Association.
- TAUS (2009). “Edinburgh, March 25-27, 2009.” On line at: http://www.translationautomation.com/meetings/edinburgh-march-25-27-2009.html (consulted 01.08.2009)
- van der Meer, Jaap (2009). “Google Translator Toolkit: What you don’t (want to) know.” On line at: http://translationautomation.com/technology/google-translation-toolkit.html (consulted 01.08.2009)
- Wallis, Julian (2006). “Interactive Translation vs. Pre-translation in the Context of Translation Memory Systems: Investigating the effects of translation method on productivity, quality and translator satisfaction.” MA in Translation Studies. Ottawa: University of Ottawa.
- Wassmer, Thomas (2003). “Comparative Review of Four Localization Tools: Deja Vu, MULTILIZER, MultiTrans and TRANS Suite 2000.” Multilingual Computing and Technology, 14(55).
- Wilks, Yorick (2005). Machine Translation: Its Scope and Limits. New York: Cambridge University Press.
- Zetzsche, Jost (2009). “The Google Translation Centre That Was to Be.” The Tool Kit. A computer newsletter for translation professionals, 142, June 9.
Biographical Note
Dr Ignacio Garcia is a Senior Lecturer at the School of Humanities and Languages, and a member of the Interpreting and Translation Research Group, University of Western Sydney, where he teaches and researches translation technologies and localisation. He has published widely in these areas in academic journals. He is also a regular contributor to Multilingual, in which he reviews translation memory tools and writes on translation-memory-related matters. His current research projects deal with the revision of translation memory output and with the integration of translation memory and machine translation systems.
E-mail: i.garcia@uws.edu.au