Analytical assessment of legal translation: a case study using the American Translators Association framework
Mary Phelan, Dublin City University
ABSTRACT
A number of analytical grading systems for translation have been developed since the 1970s. The objective of the case study described in this article was to establish the suitability of the American Translators Association (ATA) framework for the assessment of legal translation. To this end, a judgment from the English Court of Appeal was translated into Spanish by a Translation Studies student. Ten assessors, all of whom were experienced translators and native speakers of Spanish, and two of whom were also experienced ATA graders, applied the ATA framework for standardised error marking and the associated flowchart for error point decisions to the translation. Under this negative marking system, candidates must score under 18 points to pass. The subjective element in decision making was such that assessors allocated total marks ranging from 9 to over 45, with three passing and seven failing the translation. Despite these results, in their feedback six assessors deemed the translation acceptable for professional purposes while four felt that it was unacceptable. Assessor feedback indicated that some error categories overlapped or were vague, and that the flowchart was difficult to implement, in particular when deciding the level of seriousness of errors.
KEYWORDS
American Translators Association (ATA), analytical method, assessment, legal translation.
One of the aims of the Qualetra project was to find a method for assessing legal translation that would be applicable to a range of language combinations and that would be fast, reliable and inexpensive. One stream of the project, in which the author was a participant, involved taking different existing methods of assessing translation and applying them to legal translation; one of the methods chosen was the analytical approach (see Kockaert and Segers (2017) on the PIE method in this volume). A number of analytical approaches have been developed by various entities in the professional world in an effort to move away from the potential subjectivity of holistic translation assessment to a more objective, replicable system based on the identification of errors, the logic being that the fewer the errors, the better the quality. However, it has proved difficult to move away entirely from a subjective approach. Existing analytical approaches include Sical from Canada, SAE J 2450 from the automobile industry, BlackJack (Secară 2005: 41), the Lionbridge Translation Quality Index (Zearo 2005), and the American Translators Association framework for standardised error marking (Koby and Champe 2013: 167).
The Canadian Language Quality Measurement System (Sical), a quantitative system based on the identification of hundreds of potential errors in a 400-word sample of text chosen at random, was developed in the 1970s. As Williams explains, the quality controller had to assess whether or not essential elements of a message had been rendered; if not, this constituted a major error (Williams 2009: 8). As a result, the Sical system, despite a heavy emphasis on an objective quantitative approach, retained a subjective qualitative element. Williams also notes a lack of attention to macrotextual issues. While Williams (2001, 2004, 2009) has written extensively on this system, the system itself is not available online.
A statistical system, the SAE J 2450 translation quality metric, was developed from 1997 onwards by the Society of Automotive Engineers in collaboration with General Motors to spot terminological errors in their specialised field; the metric later became a standard. According to Sirena (2004), the metric is based on a division into major and minor errors in seven categories: wrong term, syntactic error, omission, word structure or agreement error, misspelling, punctuation error and miscellaneous error. The score awarded is then divided by the number of words in the text, which facilitates comparison of different translations. Decisions around the weighting of errors as either major or minor are clearly subjective, as is the miscellaneous error category. The system is applied by the supplier and, when applied correctly and in association with the General Motors terminology glossary, has reduced the amount of time spent on review processes after completion of translations. For General Motors, it is essential that translations meet a 'customer satisfaction threshold', and the metric has saved the company a great deal of time and money. However, such a threshold may well be too low for legal translation, where accurate transfer of detailed information can be vital.
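The arithmetic behind such a metric is straightforward. The sketch below (in Python) illustrates the principle Sirena describes: each error is weighted by category and severity, and the total is divided by the word count so that translations of different lengths can be compared. The weight values used here are assumptions for illustration, not the official SAE J 2450 weight table.

```python
# Sketch of an SAE J2450-style weighted score. The weights are assumed
# values for illustration only; the principle is that each error carries
# category- and severity-dependent points, and the total is normalised
# by word count so that texts of different lengths can be compared.

WEIGHTS = {  # category: (major points, minor points), assumed values
    "wrong_term": (5, 2),
    "syntactic_error": (4, 2),
    "omission": (4, 2),
    "word_structure_or_agreement": (4, 2),
    "misspelling": (3, 1),
    "punctuation": (2, 1),
    "miscellaneous": (3, 1),
}

def normalised_score(errors, word_count):
    """errors: iterable of (category, is_major) pairs; lower is better."""
    total = sum(WEIGHTS[cat][0 if is_major else 1] for cat, is_major in errors)
    return total / word_count

# Two major wrong terms and one minor punctuation error in a 250-word text:
errors = [("wrong_term", True), ("wrong_term", True), ("punctuation", False)]
print(round(normalised_score(errors, 250), 3))  # 0.044
```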
ITR International Translation Resources Ltd (acquired by Capita in February 2016) developed software called BlackJack, which is based on 21 possible errors linked to specific error weights. For example, poor expression in the target language carries a weight of 4, whereas a misinterpretation of the source language text carries a weight of 6. The implementation of this system appears quite straightforward because the assessor does not have to come to a decision on the gravity of each error (see Secară (2005) for more detailed information). The Qualetra project partners were interested in applying this model to legal translation but, unfortunately, production of the commercial software version of BlackJack has been discontinued.
Lionbridge have developed a translation quality index, which focuses on seven areas: accuracy, terminology, language quality, style guide, country standards, formatting, and client specific. Errors can be minor, major or critical and a maximum of 10 error points is allowed (Zearo 2005). Yamagata Europe's QA Distiller is a software application that can detect errors such as missing brackets, double spaces and incorrect numbers. The software can be customised to ensure that terminology is used consistently. While it is a useful tool, it is not actually designed to assess translations.
The American Translators Association (ATA) has developed a framework for standardised error marking which is used to assess the translations of ATA members who wish to become ATA certified translators. Information on the system is freely available on the association's website. To date, there is no quantitative system designed specifically for examining the quality of legal translation. This study applies the ATA method to the assessment of a translation of a legal text from English to Spanish in order to establish whether it is appropriate for this specialised area.
The ATA system
Candidates who wish to become ATA certified translators are required to translate two short passages of between 225 and 275 words each: one general text and one semi-specialised text. The ATA assessment system is complicated enough to merit detailed explanation. At first glance it appears entirely objective: it is based on the quantification of errors, errors attract points, and candidates must score under 18 points to pass. However, closer analysis reveals a number of subjective elements. The ATA framework for standardised error marking (2009) lists three categories of translation errors. The first category consists of errors that concern the form of the exam, for example, if a translation is unfinished or illegible or if the translator gave more than one option for a particular word or phrase. The second category consists of transfer errors that have a negative impact on understanding. The list of possible transfer errors comprises 13 items: mistranslation, misunderstanding of the source text, addition, omission, word choice, register, faithfulness, literalness, false friends, cohesion, ambiguity, style and an open category entitled 'other'. The third category consists of mechanical errors such as grammar, syntax, punctuation, spelling, accents, capitalisation, word form and usage. All three categories appear in the ATA analytical framework or grid (Figure 1). Category 1 and category 3 errors appear reasonably straightforward, while category 2 errors may be more open to interpretation, particularly when it comes to style and the open category 'other'. A major difficulty with the ATA system is that the assessor has to decide how serious each error is. For example, if the translator provided more than one option for the translation of a particular item, the assessor has to decide how serious the impact is and can impose 1, 2, 4, 8 or 16 points. In the case of category 3 mechanical errors, the maximum number of points that can be imposed for an error is 4. A total of three points can be awarded for quality aspects of the translation, which, according to the ATA website, could include:
- choice of a particularly felicitous word or phrase;
- exceptionally skilful casting of a sentence or sentences;
- target-language rendition that precisely mirrors ambiguity in source text.
These points do not appear in other information for graders, which meant that the assessors in this study could decide for themselves what constituted a quality point. While it is an interesting innovation to reward quality aspects, something absent from the other systems outlined above, it is unclear why the maximum number of points that can be awarded is merely 3. While a score of 18 or more in the ATA system is a fail, graders are permitted to stop counting errors once the score reaches 46. As assessors go through the translation, they complete the grid in Figure 1:
ATA CERTIFICATION PROGRAM: FRAMEWORK FOR STANDARDIZED ERROR MARKING (Version 2009)
Exam Number: ___ Exam Passage: ___ Check here if for Review: ☐

| 1 | 2 | 4 | 8 | 16 | Code | Reason |
|---|---|---|---|----|------|--------|
|   |   |   |   |    |      | **Errors that concern the form of the exam** (treat missing material within the passage as an omission) |
|   |   |   |   |    | UNF  | Unfinished (if a passage is substantially unfinished, do not grade the exam) |
|   |   |   |   |    | ILL  | Illegibility |
|   |   |   |   |    | IND  | Indecision, gave more than one option |
|   |   |   |   |    |      | **Translation/strategic/transfer errors: negative impact on understanding/use of target text** |
|   |   |   |   |    | MT   | Mistranslation (use a subcategory if possible) |
|   |   |   |   |    | MU   | Misunderstanding of source text |
|   |   |   |   |    | A    | Addition |
|   |   |   |   |    | O    | Omission |
|   |   |   |   |    | T    | Terminology, word choice |
|   |   |   |   |    | R    | Register |
|   |   |   |   |    | F    | Faithfulness |
|   |   |   |   |    | L    | Literalness |
|   |   |   |   |    | FA   | False friend (faux ami) |
|   |   |   |   |    | COH  | Cohesion |
|   |   |   |   |    | AMB  | Ambiguity |
|   |   |   |   |    | ST   | Style |
|   |   |   |   |    | OTH  | Other (describe) |
|   |   |   |   |    |      | **Mechanical errors: negative impact on overall quality of target text. Points may vary by language. Maximum 4 points** |
|   |   |   |   |    | G    | Grammar |
|   |   |   |   |    | SYN  | Syntax |
|   |   |   |   |    | P    | Punctuation |
|   |   |   |   |    | SP/CH | Spelling/Character (usually 1 point, maximum 2; if more than 2 points, another category must apply) |
|   |   |   |   |    | D    | Diacritical marks/Accents |
|   |   |   |   |    | C    | Capitalization |
|   |   |   |   |    | WF/PS | Word form/Part of speech |
|   |   |   |   |    | U    | Usage |
|   |   |   |   |    | OTH  | Other (describe) |
| 0 | 0 × 2 = 0 | 0 × 4 = 0 | 0 × 8 = 0 | 0 | Column totals | |

A grader may stop marking errors when the score reaches 46 error points. A grader may award a quality point for each of up to three specific instances of exceptional translation. Quality points are subtracted from the error point total to yield a final score. A passage with a score of 18 or more points receives a grade of Fail.

Total error points (add column totals): 0. Quality points (maximum 3): 0. Final passage score (subtract quality points from error points): 0.
Figure 1: ATA Framework for standardised error marking
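Reduced to its arithmetic, the grid works as follows. The sketch below, a purely illustrative Python rendering rather than any ATA tool (the function name and input format are my own), encodes the rules stated on the form: per-error points of 1, 2, 4, 8 or 16, the option to stop counting at 46, the subtraction of up to 3 quality points, and the fail threshold of 18.

```python
# Sketch of the scoring arithmetic in the grid above (Figure 1).
# Illustrative only: each error carries 1, 2, 4, 8 or 16 points
# (mechanical errors are capped at 4), a grader may stop counting at
# 46 error points, up to 3 quality points are subtracted, and a final
# score of 18 or more is a Fail.

def ata_passage_score(error_points, quality_points=0):
    """error_points: per-error values drawn from {1, 2, 4, 8, 16}."""
    total = 0
    for points in error_points:
        total += points
        if total >= 46:      # grader may stop marking errors here
            total = 46
            break
    return total - min(quality_points, 3)

score = ata_passage_score([4, 2, 8, 1, 2], quality_points=1)
print(score, "Fail" if score >= 18 else "Pass")  # 16 Pass
```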
To help assessors identify errors, the ATA provides an explanation of the different categories. For example, cohesion is explained as follows:
Cohesion: (COH): A cohesion error occurs when a text is hard to follow because of inconsistent use of terminology, misuse of pronouns, inappropriate conjunctions, or other structural errors. Cohesion is the network of lexical, grammatical, and other relations which provide formal links between various parts of a text. These links assist the reader in navigating within the text. Although cohesion is a feature of the text as a whole, graders will mark an error for the individual element that disrupts the cohesion. (ATA explanation of error categories)
To help assessors decide on the gravity of each error, they are provided with the flowchart in Figure 2:
Figure 2: Flowchart for error point decisions
Three overall questions appear in the bottom left-hand corner of the flowchart. Their purpose is to guide the decision-making process:
1. Can target text be used for intended purpose?
2. Is target text intelligible to the intended target reader?
3. Does the target text transfer the meaning of the source text?
Assessors start at the top of the flowchart. For each error they spot, they decide first of all whether it is a mechanical error, in which case they work down the left-hand side of the chart, or a transfer error, in which case they work down the right-hand side. Each time the assessor locates an error, she has to decide first of all whether the usefulness of the target text is affected; if it is, the error is a translation/transfer or strategy error. If the effect on understanding, use or content is negligible, zero points are imposed. If the effect is merely slight, one point is imposed. If the interference is minimal, two points are imposed. If the disruption is limited in scope, four points are imposed. If the error is serious but the text as a whole is still usable, eight points are imposed. However, if the text is not usable, 16 points are imposed.
In the case of mechanical errors, where the understanding or usefulness of the target text is not affected and the error would not be apparent to any editor, zero points are imposed. If the error is apparent to a typical target reader but the text is still intelligible, one point is imposed. If the error does not require effort from the typical reader in order to understand the text, two points are imposed. However, if the error does require effort, four points are imposed.
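The decision logic just described amounts to a mapping from verbal severity labels to point values. The sketch below is an illustrative Python rendering of the two branches, not an official ATA artefact; the label names are my own shorthand for the wording in the two preceding paragraphs.

```python
# Illustrative mapping of the flowchart's severity labels to points.
# Deciding which label applies to a given error is precisely the
# subjective step the assessors comment on later in this article.

TRANSFER_POINTS = {
    "negligible_effect": 0,           # no real effect on understanding/use
    "slight_effect": 1,
    "minimal_interference": 2,
    "disruption_limited_in_scope": 4,
    "serious_but_text_usable": 8,
    "text_not_usable": 16,
}

MECHANICAL_POINTS = {
    "not_apparent_to_editor": 0,
    "apparent_but_intelligible": 1,
    "no_reader_effort_required": 2,
    "reader_effort_required": 4,      # maximum for mechanical errors
}

def points_for(error_type, severity):
    table = TRANSFER_POINTS if error_type == "transfer" else MECHANICAL_POINTS
    return table[severity]

print(points_for("transfer", "slight_effect"))             # 1
print(points_for("mechanical", "reader_effort_required"))  # 4
```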
The ATA certification examination is held in an examinations centre and translations can be handwritten or, since 2016, typed on the candidate’s laptop. The exam lasts three hours and candidates are permitted to consult hard copy and online dictionaries and reference materials. However, they are not permitted to use email, translation forums, chat rooms, machine translation, translation memories or CAT tools. Two graders go through each translation. If they do not agree on marks, a third or even a fourth grader may be asked for their input. Results are given on a pass or fail basis only and candidates are not provided with their scores. The pass rate is twenty per cent. For more detailed information on ATA certification see Koby and Champe (2013) and the ATA website. The ATA also has a PDF document entitled ‘Into-English Grading Standards’ to help certification candidates prepare and to help graders assess.
As detailed information on the ATA system of assessment was freely available, a case study was carried out to investigate whether or not it could be suitable for the specialised area of legal translation.
The source text
The source text was a judgment from the Court of Appeal Criminal Division in England and was translated into Spanish by a student at Qualetra partner Alcalá de Henares University in Spain. The translation was carried out solely for the purpose of the Qualetra project and was used for (a) the study of analytical assessment described in this article and (b) a study of holistic assessment. The judgment falls into the category of a legal procedural document, a document that has 'a conventional structure and tenor' and a translation of which could be required not only by a defendant but also by court officials (Ortega Herráez et al. 2013: 106).
Judgments often begin with an introduction where ‘the nature of the issue is identified and the parties are introduced together with a brief explanation of their legal relationship’, followed by the facts, ‘in which the principal matters of fact assumed, admitted or proved are set out as coherently and succinctly as possible’ and are laid out in numbered paragraphs (Alcaraz Varó and Hughes 2002: 113). The judgment in this study differs from this description in that rather than providing explicit information on the nature of the issues and the parties involved, it launches straight into the facts.
The original source text consisted of 532 words, reduced to 256 words for this study to correspond to the usual length of passages in the ATA system. The source text contains abbreviations such as EWCA (Court of Appeal of England and Wales) and QC (Queen's Counsel). There are also titles such as Lord Justice and Recorder. One act, the Proceeds of Crime Act 2002, is mentioned. The crimes mentioned are conspiracy to launder the proceeds of crime and money laundering. Typical terms connected with trials that appear in this judgment are trial, judgment, conviction, sentenced, sentences, leave to appeal, co-accused, lesser sentences and concurrently. The source text does not contain any very long sentences and much of the information is quite straightforward. The source and target texts are included as an appendix.
The assessors
It was necessary to recruit a different group of assessors for this study from the group used in the parallel study of holistic assessment, which was based on the longer version of the same text.
Ten assessors, all native Spanish speakers, were provided with the source and target texts. Five were members of APTIJ, the Spanish professional association of court and sworn interpreters and translators (Asociación Profesional de Traductores e Intérpretes Judiciales y Jurados); two were professional members of the Irish Translators’ and Interpreters’ Association; and two were personal contacts. In addition, it was decided that it would be important to include two ATA graders with experience of using the system. Each assessor was paid €50.
The assessors were all experienced translators with between five and 20 years of experience in the field. The mean amount of translation experience was 12.6 years.
Not all of the assessors had experience of assessing translations, although some made the point that they regularly assessed their own work, and others were involved in revision and proofreading and so were regularly exposed to other translators' work. Of the ten, six had experience of assessing while four had none.
In addition to carrying out the assessment using the ATA method, assessors were asked whether they would accept the translation for professional purposes, what their general impression of the translation was, and what they thought of the ATA method of assessing translation. Their responses to these questions are discussed below.
The results
As the translation was complete and legible, it did not contain any category 1 errors. That left two categories, transfer errors and mechanical errors.
If we take the total marks, we find, as shown in Chart 1, a great deal of divergence, with marks ranging from 9 to over 45 (graders are permitted to stop checking and counting once the score reaches 46 points). Seven assessors reached totals of 18 or more, thus failing the translation. These included assessors 1 and 2, the two ATA graders, who agreed that the translation would fail under the ATA certification system. However, three assessors gave scores of 9, 16 and 17, thus deciding that the translation deserved to pass.
Chart 1: total marks awarded by the ten assessors
These results are interesting because the grid system is so detailed that it gives the impression of being very objective. However, the difficulty lies in the application: it can be tricky for assessors to decide, first, on the exact category of an error and, second, how serious that error is. The ATA formerly acknowledged this difficulty on its website, stating that “Although the use of points may impart a certain impression of objectivity, it is in truth still subjective” (see also Doyle 2003: 26).
Assessors had the option of rewarding positive aspects of the translation with a maximum of 3 quality points. However, six assessors did not find any positive points; of the remaining four, two awarded 1 point, one awarded 2 points and one awarded 3 points. Once again, the subjective element came into play.
Assessment of additions and omissions
While the source text consists of 256 words, the target text runs to 358 words, an increase of 40% that can be partially explained by the retention of English terms in the translation alongside their Spanish translations, and by additional explanations in Spanish. The ATA Explanation of Error Categories draws on Humbley et al. (1999: 139) to explain the addition category as follows:
Addition: (A): An addition error occurs when the translator introduces superfluous information or stylistic effects. Candidates should generally resist the tendency to insert “clarifying” material. Explicitation is permissible. Explicitation is defined as “A translation procedure where the translator introduces precise semantic details into the target text for clarification or due to constraints imposed by the target language that were not expressed in the source text, but which are available from contextual knowledge or the situation described in the source text.”
There was considerable divergence in the scores relating to additions and omissions, as the following table shows (points per assessor):

| Assessor | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|----------|---|---|---|---|---|---|---|---|---|----|
| Additions | 7 | 8 | 6 | 1 | 0 | 3 | 0 | 8 | 1 | 3 |
| Omissions | 6 | 10 | 2 | 0 | 0 | 0 | 3 | 12 | 0 | 1 |
Items omitted include Recorder, QC, His Honour and Lord Justice. Additions include items such as referencia oficial, 'official reference', which was added before what looks like an official reference number. Such an addition can be risky, particularly in legal translation.
Two assessors had differing views on one of the additions made by the translator. Assessor 3 commented on what she considered to be appropriate additions, where the translator had added information to make the text clearer. The first case was a mention of 'this country', meaning England or Wales, where the translator took the decision to add Gran Bretaña, 'Great Britain', in brackets:
| Source text | Target text | Back translation |
|---|---|---|
| He lived sometimes in this country and sometimes in Spain. | El solicitante tiene 44 años de edad y alternaba su residencia entre este país (Gran Bretaña) y España. | The applicant is 44 years of age and alternated his residency between this country (Great Britain) and Spain. |
Assessor 2 took a different point of view on this very point and commented that ‘Most of the errors are not too serious and some of them may not be errors in ‘real life’ (for example, the addition of británico and Gran Bretaña)’. The word británico, ‘British’, had been added twice, in one case to an explanation of the Central Criminal Court:
| Source text | Target text | Back translation |
|---|---|---|
| Central Criminal Court | Central Criminal Court, (el tribunal central británico responsable de delitos penales) | Central Criminal Court, (the central British court responsible for criminal offences) |
The inclusion of the name of the court in English acts as a reminder to the reader that it is an English court and thus obviates the need to repeat this information in the Spanish explanation in brackets. The distinction between assessing a translation for certification purposes and looking at a translation in 'real life' is an interesting one, but one has to wonder if it is really valid; if the addition of británico, 'British', is acceptable in real life, then why not accept it in a test situation?
Assessor 3 also approved of the following example, where the translator converted pounds sterling to euros, something translators are often advised not to do, although the ATA does not penalise such conversions unless they are incorrect (ATA FAQs):
| Source text | Target text | Back translation |
|---|---|---|
| In all more than £500,000 passed through his two accounts at the Halifax which were the proceeds of crime. | En total, ingresó más de 500 000 libras (cerca de 609 000 euros) en las dos cuentas bancarias que tenía en Halifax, dinero procedente de sus delitos. | In all, he lodged more than 500,000 pounds (about 609,000 euros) in the two bank accounts that he had in Halifax, money that was the proceeds of his crimes. |
In fact, the two ATA assessors did penalise this addition, imposing two points.
The translator’s strategy was to keep the original names of courts and laws, but this strategy was applied somewhat inconsistently. In the case of the Central Criminal Court, as we have just seen, the name in English was retained in the text and followed by an explanation in Spanish in brackets. In the case of the Proceeds of Crime Act, the details are translated into Spanish with the addition of the word británica, followed by the name of the Act in English. The approach was inconsistent, but the strategy was well-intentioned in that the aim was to explain as much as possible and to provide information so that details could be checked by speakers of English if necessary. This type of inconsistency would probably be spotted in a holistic assessment, but in the ATA framework it would have to go under the category 'other', which was used by just one assessor, assessor 9, for an abbreviation. Most assessors did not penalise the retention of information in English, but many objected to the addition of británica along with the translation of the name of the Act:
| Source text | Target text | Back translation |
|---|---|---|
| On count 6 of the trial indictment, for money laundering contrary to section 327(1) of the Proceeds of Crime Act 2002, to four years' imprisonment | Por el cargo sexto del escrito de acusación, blanqueo de capitales en contra de lo establecido en el artículo 327, apartado 1, de la ley británica de Prevención del Blanqueo de Capitales, aprobada en 2002, (section 327(1) of the Proceeds of Crime Act 2002), a cuatro años de prisión | On the sixth charge of the indictment, money laundering contrary to article 327, section 1, of the British law on the Prevention of Money Laundering, passed in 2002 (section 327(1) of the Proceeds of Crime Act 2002), to four years in prison. |
Acceptability
As outlined above, ten assessors applied the ATA system to the translation, with seven failing it and three awarding a pass. All were asked whether they felt the translation was acceptable for professional purposes; four felt that it was unacceptable, while six, including two who had designated it a fail under the ATA system, felt that it was acceptable, or could be acceptable if some changes were implemented. As Prieto Ramos indicates, such notions as adequacy, suitability, appropriateness and, the term used here, acceptability "are presented in very general terms and their application thus depends on the judgement of translators and revisers" (2014: 14). While this is true, it is interesting to hear the views of professional translators on what is acceptable and what is not. Assessor 9, who gave the translation 28 points, said she would not accept it. Assessor 6, who gave the translation 29 points, would accept it with changes, that is, after a revision. While the ISO 17100: 2015 standard requires that translations be revised by a second person, in reality not all translations go through a revision process. In any case, the work submitted by translators for revision should be of as high a standard as possible. Assessor 1, who gave 43 points, acknowledged some positive aspects but found that the accumulation of errors ultimately made the translation unusable:
Assessor 1: On first reading, I thought it was pretty good. It reads well and has some elegant solutions. However, upon more careful scrutiny many small errors revealed themselves. While none of these mistakes is catastrophic, their aggregate effect renders the translation unusable, in my opinion (although this would in fact depend on what the intended purpose is).
Assessor 2 (45+ points) expressed a similar opinion:
Assessor 2: It struck me as a pretty good translation on a quick, first reading. However, after a careful analysis, there appeared too many errors, to the point of reaching the barrier of 45 error points, after which we stop counting. […] as a whole, I think this translation is unusable for professional purposes.
Assessor 8 (45+ points) took the view that the translation could be acceptable in certain circumstances:
Assessor 8: I personally would not accept it as a professional translation because I see that it does not have the quality standards I would like to find in a translation (this opinion is from a translator's point of view). I would say, though, that this translation could be "acceptable" in some instances. Aside from a few major errors, the text transfers the information although not in a professional way. Legal professionals often deal with documents that are not translated by specialists and I am afraid it is something they are "used" to.
Assessor 3 alluded to the urgency aspect:
Assessor 3: Yes, I would accept it as I would probably have paid for it and I would be pressed for time, as is usually the case when somebody gets a legal document translated. I would probably have doubts about its quality and if that was the case I would look for clarification from the agency or the translator. I might not use the same agency or translator the following time.
Assessor 7 made an interesting point about the official version:
Assessor 7: it is not a perfect translation, but the target reader knows the situation and is able to complete the inaccurate information. Besides the official version is the English one, not the Spanish one; otherwise, I would not accept this translation for professional purposes.
Assessor 10 looked at it from the point of view of a defendant:
Assessor 10: It is acceptable even if it is not completely accurate and seems to be a translation. The effect on a non-native real life defendant would be the same as on a native real life defendant who does not understand legalese.
Acceptability, as acknowledged here by some of the assessors, depends on the purpose of the translation and its target audience.
Assessor feedback on using the ATA framework and flowchart
For eight assessors, this was their first experience of using the ATA grid. They found the process of working out how to use it quite difficult, with the flowchart presenting particular problems:
Assessor 3: It took me a long time to get my head around the grid. I found it especially hard to evaluate whether the impact of a mistranslation was slight, minimal, etc. Assigning value to different errors is, obviously, very useful, but not easy.
Assessor 5: I had more trouble with the flowchart but it did help me clarify how to assess some particular errors.
Assessor 10: It is the first grid I've ever seen to spot and categorize errors. In my opinion, having a roadmap is a good method (or the least bad method) but it is not that easy to follow the flowchart.
Assessor 7: The guidelines for using the grid method could be clearer. More than one reading is necessary to know how to do the assessment. In my case, the overall questions of the flowchart have been unhelpful. I have taken my decisions according to the three questions included in the flowchart.
Assessor 8: I find the grid method an acceptable means although it would need more elaboration. It is too vague in some instances; for example, in the guidelines for assigning point decisions the boundaries in the scale are not clear, I could not really tell the difference between deducting 1 and 2 points in the right part of the flowchart (translation/transfer/strategy error).
Assessor 6 was more positive:
Assessor 6: It takes some time to understand it, but once you apply it, it makes sense. I see no area for improvement, and, in fact, will use it as a reference for future revisions.
The lack of guidance on quality points was brought up by assessor 4 who also expressed positive views on the grid approach:
Assessor 4: I think the use of the above grid for assessing translations is a very comprehensive and thorough method of recording errors, although the marking of ‘quality points’ is a bit less detailed.
Another issue was the possibility of over-penalising mistakes, as explained here:
Assessor 8: I have also found that certain paragraphs that present more than one error category are heavily penalized; i.e. paragraphs with both syntax and punctuation errors can cause a cohesion error and when we treat them as separate errors we are over-penalizing the text.
A similar point was made by assessor 7:
Assessor 7: When there is more than one error in the same segment, the assessor does not have any instructions.
Assessors 5 and 7 also commented on the subjectivity of the approach:
Assessor 5: In some other cases I found that the flowchart can be quite subjective — hence not as systematic as it could be?
Assessor 7: At first glance, the system seemed to me quite objective, but it is not. The assessor has to choose the classification of the error and then the mark.
It is likely that, with practice, many of these problems could be surmounted. Unlike ATA graders, the assessors in this study did not have the opportunity to do a trial run during which they could learn how to apply the framework, get feedback on their approach and discuss issues. Nor did they have access to the grader workshops offered by the ATA Certification Committee. They were working on their own and had to work out the system as best they could.
Suitability of the ATA framework for legal translation
Malcolm Williams (2009: 5-7) has identified eight potential problems or issues in translation quality assessment (TQA). They are:
(a) The evaluator
(b) Level of target language rigour
(c) Seriousness of errors of transfer
(d) Sampling v full-text analysis
(e) Quantification of quality
(f) Levels of seriousness of error
(g) Multiple levels of assessment (and how an overall quality rating can be given)
(h) TQA purpose or function – what is the TQA for?
The evaluators in this study were all translators rather than end users. Issues relating to target language rigour, such as style, did not emerge in the current case study. The seriousness of errors of transfer was an issue, with some assessors experiencing difficulty in making decisions on this point. Sampling v full-text analysis is a potential issue because the ATA system is designed for short texts. If a longer text were used, could the graders adjust the pass mark and the mark beyond which they stop grading? For example, for a 500-word text, would the pass mark be 36 and would graders stop grading when they reached 90 points? (A sketch of this proportional scaling is given below.) Alternatively, would a sample be sufficient to decide on the quality of the whole translation?

Williams' point about the quantification of quality is an interesting one and relates to the difference (if any) between a translation that barely meets the criteria for a quality piece of work and one that just misses out. In the ATA framework context, this would be a translation that is awarded 17 points, and therefore a pass, compared to one that is awarded 18 points, and therefore a fail. The two translations may be very similar, but one passes and the other fails. One has to wonder whether this magic number of 18 can really be applied to all translations. When a translation is used for certification purposes, as in the case of the ATA, the choice of text is very important: if the source text is very straightforward, candidates will make fewer errors and will pass the exam; likewise, if the source text is excessively complex, nobody will pass. When assessing translators across a variety of language combinations, the task of ensuring that all source texts present a similar level of difficulty becomes even more challenging. We have already seen the difficulties around decision-making on the level of seriousness of errors. The ATA system is comprehensive and allows for the provision of an overall quality rating. In this case study, the purpose of the TQA was to establish the suitability or otherwise of the ATA system for the assessment of legal translation.
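For illustration, the proportional scaling mooted above would amount to the following. This is a hypothetical rule, not ATA practice; the baseline figures of 250 words, a fail threshold of 18 and a stop mark of 45 are assumed from the article's own arithmetic.

```python
# Hypothetical linear scaling of the ATA thresholds to longer texts,
# matching the figures mooted above (pass mark 36 and stop mark 90 for
# a 500-word text). Not an ATA rule; merely the arithmetic implied.

BASE_WORDS, BASE_FAIL, BASE_STOP = 250, 18, 45

def scaled_thresholds(word_count):
    factor = word_count / BASE_WORDS
    return round(BASE_FAIL * factor), round(BASE_STOP * factor)

print(scaled_thresholds(500))  # (36, 90)
```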
As explained above, the 13 transfer error categories used in the ATA system are: mistranslation, misunderstanding of the source text, additions, omissions, word choice, register, faithfulness, literalness, false friends, cohesion, ambiguity, style and an open category entitled other. Are all these categories applicable to legal translation?
Assessor 3 found that the categories of misunderstanding, mistranslation and false friends were vague and overlapped with each other and recommended that all could be subsumed into the terminology/word choice category.
In relation to false friends, Alcaraz Varó and Hughes suggest that a formal register and the influence of Latin mean that the problem of false cognates "is especially acute in texts of this type", i.e. in legal texts (2002: 173). Leo Hickey suggests that "bulky glossaries" of false friends could be compiled for Spanish-English legal translation, and provides some examples, including English 'sentence', which corresponds to condena or pena in Spanish, while Spanish sentencia corresponds to 'decision', 'verdict' or 'decree' in English (2013: 131).
Assessor 3 also suggested that some of the ATA categories are useful for an error analysis of legal translation. In particular, she noted that literalness is a common approach among early-career translators, who prefer to play safe and have not yet built up the confidence and experience to move away from the source language text. The nature of legal language and the obscurity of some legal texts mean that the temptation to stick close to the original is even greater. Indeed, literalness is an occasional feature of the translation under discussion here, as in this example:
| Source text | Target text | Back translation |
|---|---|---|
| The applicant was sentenced by Judge Wide QC as follows | su señoría el juez Wide condenó al solicitante de acuerdo con lo siguiente | His lordship Judge Wide sentenced the applicant in accordance with the following |
Assessor 3 suggested that de acuerdo con lo siguiente (‘in accordance with the following’) is too literal and too vague.
Ambiguity is also an important characteristic of legal texts, and the challenge in translation is often to preserve that ambiguity. This feature is included in the ATA system and is defined as follows:
Ambiguity: (AMB): An ambiguity error occurs when either the source or target text segment allows for more than one semantic interpretation, where its counterpart in the other language does not. (ATA explanation of error categories)
With minor changes, the ATA framework could be adapted to legal translation.
Conclusion
This study has limitations in that it focuses on only one translation, carried out by a student. Eight of the assessors had not undergone training in how to apply the ATA system and had to work out how to do so themselves, based on the framework and flowchart. While ATA practice is for two graders to assess each translation, the assessors in this case study were working on their own. Therefore, the results are not generalisable, and further work would need to be carried out, perhaps on a number of translations carried out by professional translators.
Despite these limitations, the translation was assessed by ten assessors and the results are of interest, particularly the subjective element, demonstrated by the disparity in marks given by the assessors even for something that seems as straightforward as an addition or an omission. There were also difficulties with implementation; while the framework in the form of the grid (Figure 1) is fairly straightforward, the flowchart for error point decisions (Figure 2) is quite confusing, and, as pointed out by some of the assessors, it is quite difficult to differentiate between transfer errors that have a slight impact, those that involve minimal interference and those whose disruption is limited in scope. The openness of the system to individual interpretation, or subjectivity, explains why the ATA has opted for the labour-intensive approach of allocating two graders, plus a third or even a fourth in cases of disagreement. At first sight the ATA framework gives the impression of being an analytical approach with very little room for subjectivity on the part of assessors. Despite this, as demonstrated in this article, it turned out to be quite subjective when implemented.
The framework could be adapted slightly and applied to legal translation, but the variety of legal texts (judgments, contracts, letters of request, statutes, adoption papers, divorce papers, extradition requests) is such that the framework would perhaps need to be refined further for specific text types, something that would rather defeat the purpose of this analytical approach. For example, if a translator mistakenly types an incorrect date or misspells a name in the translation of a birth certificate, that single small mistake would be totally unacceptable.
One of the difficulties in assessing a legal translation such as the one used in this study is the lack of guidelines on how best to carry out legal translation. As Leo Hickey says:
One advantage – and disadvantage – in this context, at least in the United Kingdom, is that there is practically no quality control, no feedback, and we seldom see others work or others’ work. So we just do whatever we think best (2013: 124-125).
We have seen some evidence of this in relation to the issue of additions above. Translators and assessors may disagree about the most appropriate approach to various translation issues that are specific to legal translation.
Bibliography
- Alcaraz Varó, Enrique and Brian Hughes (2002). Legal Translation Explained. Manchester: St Jerome.
- Doyle, Michael Scott (2003). “Translation Pedagogy and Assessment: Adopting ATA’s Framework for Standard Error Marking.” ATA Chronicle Nov-Dec, 21-28. www.atanet.org/chronicle-online/2p-contents/uploads/2003-November-December.pdf (consulted 22 Dec. 2016).
- Hickey, Leo (2013). “Translating for the Police, Prosecutors and Courts: the Case of English Letters of Request.” Anabel Borja Albi and Fernando Prieto Ramos (eds). Legal Translation in Context – Professional Issues and Prospects. Bern: Peter Lang, 123-141.
- Humbley, John, Geoffrey S. Koby and Sue Ellen Wright (1999). “English Terminology.” Delisle, Jean, Hannelore Lee-Jahnke and Monique C. Cormier (eds). Translation Terminology. Amsterdam and Philadelphia: John Benjamins, 107-212.
- Koby, Geoffrey S. and Gertrud G. Champe (2013). “Welcome to the Real World: Professional Level Translator Certification.” Translation & Interpreting 5(1), 156-173.
- Kockaert, Hendrik J. and Winibert Segers (2017). “Evaluation of Legal Translations: PIE Method (Preselected Items Evaluation).” The Journal of Specialised Translation 27.
- Ortega Herráez, Juan Miguel, Cynthia Giambruno and Erik Hertog (2013). “Translating for Domestic Courts in Multicultural Regions: Issues and New Developments in Europe and the United States of America.” Anabel Borja Albi and Fernando Prieto Ramos (eds) (2013). Legal Translation in Context – Professional Issues and Prospects. Bern: Peter Lang, 90-121.
- Prieto Ramos, Fernando (2014). “Quality Assurance in Legal Translation: Evaluating Process, Competence and Product in the Pursuit of Adequacy.” International Journal for the Semiotics of Law, 11-30.
- Secară, Alina (2005). “Translation Evaluation – a State of the Art Survey.” Proceedings of the eCoLoRe/MeLLANGE Workshop, Leeds. Translation Studies Abstracts, Manchester: St. Jerome Publishing, 39-44. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.126.3654&rep=rep1&type=pdf (consulted 22 Dec. 2016).
- Sirena, Don (2004). “Mission Impossible: Improve Quality, Time and Speed at the same time.” Globalization Insider 13(2.2), http://www.translationdirectory.com/article387.htm (consulted 16 Dec. 2016).
- Williams, Malcolm (2001). “The Application of Argumentation Theory to Translation Quality Assessment.” Meta. Translators’ Journal 46(2): 326-344.
- --- (2004). Translation Quality Assessment: An Argumentation-Centred Approach. University of Ottawa Press.
- --- (2009). “Translation Quality Assessment.” Mutatis Mutandis 2(1): 3-23.
- Zearo, Franco (2005). “Measuring Language Quality with the Translation Quality Index (TQI): A Real World Case Study of Language QA at Lionbridge.” PowerPoint presentation. https://www.hitpages.com/doc/6299964766420992/1#pageTop (consulted 22 Dec. 2016).
Websites
- American Translators Association. “ATA certification program FAQs.” http://atanet.org/certification/aboutexams_faqs.php (consulted 30 Nov. 2015).
- American Translators Association. “Framework for Standardised Error Marking: Explanation of Error Categories.” https://www.atanet.org/certification/aboutexams_error.php (consulted 22 Dec. 2016).
- American Translators Association (2013). “ATA Certification Program. Into-English Grading Standards.” http://www.atanet.org/certification/Into_English_Grading_2013.pdf (consulted 16 Dec. 2016).
- ISO 17100: 2015. “Requirements for translation services” https://www.iso.org/obp/ui/#iso:std:iso:17100:ed-1:v1:en (consulted 22 Dec. 2016).
- QA Distiller http://www.qa-distiller.com/en (consulted 16 Dec. 2016).
- SAE International http://www.sae.org/standardsdev/j2450p1.htm (consulted 16 Dec. 2016).
- Sical. Public Works and Government Services Canada. https://goo.gl/1hjVN2 (consulted 25 Sept. 2015).
Biography
Mary Phelan lectures in translation studies at the School of Applied Language and Intercultural Studies at Dublin City University, Ireland, and is chairperson of the Irish Translators’ and Interpreters’ Association. She was involved in the Qualetra project, which was funded by the European Commission.
E-mail: mary.phelan@dcu.ie
Appendix - The Translation

Source text (256 words):

Regina v A No: 2012/6688/A6 Court of Appeal Criminal Division 22 May 2013 [2013] EWCA Crim 976 Before: Lord Justice Elias Mr Justice Openshaw The Recorder of Liverpool His Honour Judge Goldstone QC (Sitting as a Judge of the CACD) Wednesday, 22 May 2013 Representation Mr D Armstrong appeared on behalf of the Applicant. Judgment Mr Justice Openshaw: 1 On 30th October 2012 at the Central Criminal Court, following his conviction after a trial, the applicant was sentenced by His Honour Judge Wide QC as follows. On count 1, conspiracy to launder the proceeds of crime, to eight years imprisonment. On count 5 of the trial indictment, for fraud, to one year's imprisonment. On count 6 of the trial indictment, for money laundering contrary to section 327(1) of the Proceeds of Crime Act 2002, to four years' imprisonment. These sentences were ordered to run concurrently, making eight years in all. He now renews his application for leave to appeal against sentence following refusal by the single judge. Various co-accused were convicted of other offences and received lesser sentences. 2 The applicant is aged 44. He lived sometimes in this country and sometimes in Spain. He paid no tax over the relevant period in either country. In all more than £500,000 passed through his two accounts at the Halifax which were the proceeds of crime. One account was used to buy three different properties which were nominally transferred on to others including co-defendants. Mortgage repayments for these properties were again funded by crime.

Target text:

SENTENCIA ENTRE REGINA v A NÚMERO: 2012/6688/A6 Court of Appeal Criminal Division (Tribunal de Apelación, Sala de lo Penal) 22 de mayo de 2013 [2013] EWCA Crim 976 Ante: Miércoles, 22 de mayo de 2013 Representación: El señor D Armstrong se personó en representación del Solicitante. Fallo: El magistrado Openshaw: 1. El 30 de octubre de 2012, en Central Criminal Court, (el tribunal central británico responsable de delitos penales), después de ser declarado culpable tras un juicio, su señoría el juez Wide condenó al solicitante de acuerdo con lo siguiente. Por el primer cargo, conspiración para el blanqueo de capitales, productos de un delito, se le condenó a ocho años de prisión. Por el cargo quinto del escrito de acusación, fraude, a un año de prisión. Por el cargo sexto del escrito de acusación, blanqueo de capitales en contra de lo establecido en el artículo 327, apartado 1, de la ley británica de Prevención del Blanqueo de Capitales, aprobada en 2002, (section 327(1) of the Proceeds of Crime Act 2002), a cuatro años de prisión. El juez ordenó que dichas penas fueran concurrentes, sumando un total de ocho años de prisión. El solicitante reanuda ahora su petición para solicitar una apelación contra dicha condena tras haber sido denegado por el juez. Varios de los otros acusados fueron condenados por otros delitos a penas más cortas. 2. El solicitante tiene 44 años de edad y alternaba su residencia entre este país (Gran Bretaña) y España. No pagó los impuestos correspondientes en ninguno de los dos países durante el período en cuestión. En total, ingresó más de 500 000 libras (cerca de 609 000 euros) en las dos cuentas bancarias que tenía en Halifax, dinero procedente de sus delitos. El dinero de una de las cuentas se empleó para la compra de tres propiedades diferentes, las cuales se transfirieron a nombre de otros, incluidos otros acusados. Los préstamos hipotecarios de dichas propiedades fueron, de nuevo, financiados con capital procedente de sus delitos.