Conceptualising translation revision competence: A pilot study on the ‘tools and research’ subcompetence

Isabelle S. Robert, Ayla Rigouts Terryn, Jim J.J. Ureel, and Aline Remael, TricS Research Group, University of Antwerp

ABSTRACT

Translation revision is an important step in the translation workflow. However, translation revision competence remains ill-defined. After identifying what is understood by ‘revision’ in a translation context and discussing the theoretical translation revision competence (TRC) model previously designed by the authors, this article analyses and interprets the results of an empirical pilot study designed to test the presence of the tools and research subcompetence hypothesised in the TRC model. An experiment with 21 master’s-level translation and/or language students was carried out: the experimental group was given revision training as a form of treatment and the control group was not. The TRC subcompetence under investigation was tested using a pretest–posttest experimental design. Both groups performed four controlled revision tasks and their revision process was keylogged. The results, subjected to quantitative statistical analyses, show that revisers and translators use the same tools, as hypothesised, but that they use these tools differently.

KEYWORDS

Keystroke logging, ‘tools and research’ subcompetence, translation revision competence, translation revision.

1. Introduction

Translation revision, that is, reading a human translation to “identify features of the draft translation that fall short of what is acceptable [...] and make any needed corrections and improvements” (Mossop 2014: 115), is a compulsory component of the translation process, at least for translation service providers (TSPs) who want to be certified according to the European standard EN 15038 for translation services (European Committee for Standardization 2006) and its ‘successor’, the ISO 17100:2015 standard for translation services (International Organization for Standardization 2015). The ‘reviser’ is someone other than the translator; translators also check their own translations, but that step is called ‘checking’ and is carried out before revision proper.

Nowadays, translation revision (TR) is becoming common practice: translators no longer simply ‘translate’. With the advancement of computer-assisted translation (CAT) tools, such as translation memories (TMs), translators are given a target text (TT) which they can accept, modify or reject. Consequently, many translators are actually ‘revising’. Moreover, translation workbenches often integrate machine translation (MT), which means that translators become ‘post-editors’ who revise MT output. Finally, with the rise of crowdsourced translation, more and more translation jobs involve revising amateur or MT output (Declercq 2014).

In this context, traditional views on translation competence (TC) are being challenged: it seems that future and even current translators need not only TC, but also translation revision competence (TRC) and even, in the case of MT, post-editing competence (PEC) (Barabé 2013; Declercq 2014; Garcia 2011; Pym 2013). However, research is lagging behind practice, and solid research foundations are urgently needed in the quickly evolving translation profession. Indeed, TC has been a major research topic in Translation Studies, in particular since the 1990s, with a “significant amount of literature dealing with the definition of TC” (Schäffner 2012: 31), whereas TRC has seldom been addressed as a research topic, except by a handful of researchers such as Hansen (2009) or Künzli (2006).

As Hansen (2009) and Biel (2011) argue, if translation revision is expected of translators, it may be beneficial to include translation revision in translator training programmes. However, what precisely should translation students be taught to develop the skills required of competent revisers? Are there competences that revisers need but that translators do not necessarily have? The EN 15038 standard and ISO 17100:2015 seem to imply that any skilled and experienced translator should be qualified to revise. However, does this mean that experience is the only factor that distinguishes revisers from translators? In her comparison of the concepts of translation and revision, Hansen (2009: 274) states that “translation revision seems to require additional skills, abilities and attitudes, and/or enhanced levels of competence in certain areas.” Hansen’s conceptual definition closely resembles Mossop’s (1992) description of the goals of a revision course for translation students: the ability to justify changes is a crucial step towards becoming a better reviser, and it is crucial for translators to “achieve the mental switch from a ‘retranslating’ to a ‘revising’ frame of mind” (Mossop 1992: 82). In addition, Mossop underscored the significance of interpersonal skills in his handbook on revising and editing for translators (Mossop 2001, 2007, 2014), as did Horguelin and Brunette (1998) in their revision handbook. Likewise, Künzli (2006) agrees that the acquisition of interpersonal competence should constitute an important focal point in courses on translation revision, a point of view recently confirmed by researchers focusing on peer review in translation teaching (Lisaité, Vandepitte, Maylath, Mousten, Valdez, Castel-Branco and Minacori 2016).

In summary, there appears to be a consensus among researchers that translation revision competence (TRC) does indeed share various subcompetences with translation competence (TC). However, researchers also agree that there are some fundamental differences between the two constructs and that TRC consists of additional subcompetences. To our knowledge, no attempt has been made yet to construct a TRC model, based on empirical research, to organise and define the subcompetences that translation revision entails. Therefore, Robert, Remael and Ureel (2016) started that process by creating a potential TRC model, based on existing TC models, and announced empirical research to confirm or reject (either completely or partly) hypotheses related to the subcompetences of TRC. The pilot study presented below is the first step towards that goal and was conducted within the context of a one-year research project (October 2014–September 2015) at the University of Antwerp1. It will serve as a stepping-stone to future research, which will involve both quantitative and qualitative research with students and professionals.

Before we highlight what TRC entails, it is vital that we address possible terminological fuzziness with regard to the terms ‘translation revision’ and ‘competence’ to avoid confusion. Consequently, Section 2 of this paper is dedicated to terminology. In Section 3, Robert, Remael and Ureel’s TRC model (Robert et al. 2016) is introduced, including the TC models that inspired the TRC model, with a more detailed description of the subcompetences specific to translation revision. Section 4 addresses research methodology and provides a thorough description of the research design and the materials used. In Sections 5 and 6 respectively, the results and the conclusion are presented, together with the limitations of the study, and questions for further research.

2. Terminology: Translation revision and competence

In an earlier publication, Robert (2008) highlighted the inconsistent operationalisation that the concept of translation revision had (until then) suffered from. Before her, Künzli (2005: 32) had also noted how the few studies on translation revision had revealed some “terminological confusion.” This terminological issue has been discussed more thoroughly by Robert et al. (2016). Therefore, we will remain brief in this respect.

In Translation Studies, the text to be revised is generally a translation, which is the case in this study, and the person revising is someone other than the translator. This type of revision is what we consider translation revision proper. Revision by the original translator (i.e., self-revision) is not investigated here, although it is often a focal point in translation-process research. Because translation revision should be done by a person other than the original translator and because revision is a form of quality control, it should also be done after translation and self-revision, but before the translation is delivered to the client.

Furthermore, we believe that revision proper implies the revision of the entire translation. To sum up, we stand by what was written in Robert et al. (2016: 4): “The term revision should apply only to the revision of a translation by a reviser who is someone other than the original translator, who revises the translation entirely before it is delivered to the client.” In addition, we define ‘revising’ as does Mossop (2014: 115): it is “to identify features of the draft translation that fall short of what is acceptable [...] and make any needed corrections and improvements.”

Similar terminological confusion applies to the concept of ‘competence,’ as noted by Robert et al. (2016). In disciplines such as applied linguistics, the construct of competence is often operationalised as “a roughly specialized system of individual and/or collective abilities, proficiencies, or skills that are necessary or sufficient to reach a specific goal” (Weinert 2001: 45, cited by Lesznyák 2008: 31). Like Lesznyák (2008: 49), we consider translation competence (TC) to involve “all the skills and knowledge that contribute to the successful completion of a translation task.” Therefore, we define TRC as all the skills and knowledge that contribute to the successful completion of a revision task.

3. TRC model
3.1 Inspiration from TC models

The TRC model as presented by Robert et al. (2016) is based mainly on two existing TC models: (1) PACTE’s TC model (2000, 2003, 2005, 2008, 2009, 2011a, 2011b, 2014, 2015; Hurtado Albir 2015, 2016) and (2) TransComp’s TC model (Göpferich 2008, 2009, 2013; Göpferich & Jääskeläinen 2009). These two models were selected as a foundation for our TRC model for two main reasons: (1) they have been empirically tested (and testing is still ongoing) and (2) they are complementary. For example, the TransComp model includes the translation routine subcompetence and provides more details about factors that can influence the translation process, as will be explained below. A third source of inspiration for the TRC model is the reference framework used in the European Master’s in Translation (EMT) partnership project (EMT Expert Group 2009), which describes competences for professional translators and experts in multilingual and multimedia communication and which includes revision and translation competences.

The PACTE group defines translation competence as

the underlying system of knowledge needed to translate. It includes declarative and procedural knowledge, but the procedural knowledge is predominant. It consists of the ability to carry out the transfer process from the comprehension of the source text to the re-expression of the target text, taking into account the purpose of the translation and the characteristics of the target text readers. It is made up of five sub-competencies (bilingual, extra-linguistic, knowledge about translation, instrumental and strategic) and it activates a series of psycho-physiological mechanisms. (PACTE 2005: 58)

In addition to the five subcompetences, psycho-physiological components are also included. Although PACTE’s TC model is the most tested TC model to date, it is still open to improvements. For example, Kelly (2005) added ‘interpersonal competence’ as a separate subcompetence.

The other TC model used as a source of inspiration for our TRC model is TransComp’s TC model (Göpferich 2009), which explores the development of translation competence by means of a longitudinal study. The study was based on a model developed by Göpferich, the project leader and principal investigator, and was inspired by PACTE’s model. TransComp’s TC model consists of six subcompetences, which roughly correlate with the subcompetences in the PACTE model: (1) communicative competence in at least two languages, (2) domain competence, (3) translation routine activation competence (new compared with PACTE’s model), (4) tools and research competence, (5) psycho-motor competence (implicitly included in PACTE’s model) and (6) strategic competence. As in the PACTE model, these subcompetences are combined and controlled by the strategic competence, to which TransComp adds motivation. Three additional factors determine the use of the subcompetences: (1) the translation brief and translation norms, (2) the translator’s self-concept and professional ethos and (3) the translator’s psycho-physical disposition. The TransComp group have not focused on validating the TC model’s components as such, since the model served as a framework. Rather, the group have investigated the development of TC. Their results so far suggest that the more complex strategic subcompetence does not develop until less complex subcompetences have reached certain threshold values (Göpferich 2013: 74). Research has focused mainly on subcompetences considered to be specific to TC, that is, the tools and research, strategic and translation routine activation subcompetences.

The last model taken into consideration was developed by the EMT expert group, set up by the Directorate-General for Translation (DGT) in April 2007. The six subcompetences proposed by the expert group are all interdependent and, as a whole, they lead to the qualification of experts in multilingual and multimedia communication. They constitute minimum requirements, but do not exclude other competences that may be required. The six subcompetences are (1) language competence, (2) intercultural competence, (3) info-mining competence, (4) technological competence, (5) thematic competence and (6) translation service provision competence. Combined, these six subcompetences create a model that is similar to PACTE’s and TransComp’s TC models (see Robert et al. 2016).

Finally, as explained in Robert et al. (2016), the TRC model is also based on insights from research on revision practice or revision training by researchers such as Brunette (2002, 2007, 2013), Hansen (2009), Hernández-Morin (2009), Robert and Brunette (2016) and Schjoldager, Wølch Rasmussen and Thomsen (2008).

3.2 Translation revision competence (TRC)

Based on the TC models above and existing (albeit limited) research into TRC, Robert et al. (2016) created a TRC model, which includes the following nine subcompetences (Figure 1):

  • Four subcompetences which are known from TC models and which are expected to be the same in both TC and TRC models: (1) bilingual subcompetence, (2) extralinguistic subcompetence, (3) knowledge about translation subcompetence, (4) translation routine activation subcompetence;
  • Two subcompetences which are also inspired by TC models but are thought to be only partially similar to their counterparts in TC models: (5) tools and research subcompetence, (6) interpersonal subcompetence, and
  • Three subcompetences specific to revision: (7) knowledge about revision subcompetence, (8) revision routine activation subcompetence and (9) strategic subcompetence for revision.


Figure 1. Robert et al. (2016) TRC Model

In addition, three factors that determine and control the use of all subcompetences are included: (1) translation and revision norms and briefs, (2) the translator’s and reviser’s psycho-physical dispositions and (3) the translator’s and reviser’s self-concept or professional ethos. Since Robert et al. (2016) provide detailed definitions of each subcompetence and of all three factors of their TRC model, only the subcompetence investigated in this pilot study (the tools and research subcompetence) is reproduced below in a summarised version (see Table 1).

Subcompetence                     Operational definition
Tools and research subcompetence  Predominantly procedural knowledge related to the use of translation- and revision-specific conventional and electronic tools (definition based on PACTE 2003 and Göpferich 2009).
Table 1. Operational definitions of revision subcompetences in pilot study

Accordingly, the following hypotheses were formulated about the tools and research subcompetence2:

      • Hypothesis 1: Translators and revisers use the same tools.
      • Hypothesis 2: Compared with translators, revisers use the same tools, but in a different way:
        • 2a: they spend more time in resources;
        • 2b: they use the same tools more frequently;
        • 2c: they combine more resources per problem-solving process.

Section 4 highlights the methodological considerations taken into account to test the hypotheses formulated above.

4. Methods

To verify the hypotheses about the nature of TRC, we used a pretest–posttest design, with an experimental group and a control group. Two data-collection tools were used: (1) four revision tasks (two pretest revision tasks and two posttest revision tasks) for product analysis, and (2) the keystroke logging software program Inputlog (Leijten and Van Waes 2013) for process analysis.

4.1 Participants

The participants were 21 students in the final semester of a one-year language-related master’s programme. The experimental group consisted of 12 participants, who were tested before and after attending a course on revision and editing, which was an elective course in the Master’s in Translation programme at the University of Antwerp, Belgium. The control group consisted of 9 participants, who participated in the pretest and posttest without taking the revision and editing course. The revision and editing course lasted one semester (two hours/week, 13 weeks, from February 2015 to May 2015) and the students in the course received both lectures and practical assignments on translation revision.

Of the 21 participants, 18 were students in the Master’s in Translation programme (12 in the experimental group, 6 in the control group), while the remaining three participants were students in the Master’s in Linguistics or the Master’s in Linguistics and Literature programmes. All details are summarised in Table 2. All participants were native speakers of Dutch.

In brief, all participants were ‘translation or language trainees’ and ‘translation revision trainees’ and, thus, not professional translators or revisers. Although our hypotheses should ideally be tested with professionals, this was not feasible within the scope of this pilot study3. However, in further research, professionals will be included. For the sake of convenience, we will speak of ‘translators’ and ‘revisers’ in this contribution.

Programme                               Experimental group  Control group  Total
Master’s in Translation                 12                  6              18
Master’s in Linguistics                 0                   1              1
Master’s in Linguistics and Literature  0                   2              2
Total                                   12                  9              21
Table 2. Participant profiles

4.2 Material

The participants, who all gave their informed consent to take part in the experiment, were presented with four revision tasks, divided equally over the pretest and the posttest. As announced above, we used keystroke-logging software to collect process data, in particular, data about the tools and resources used in the revision process.

4.2.1 Pretest and posttest revision tasks

The texts used for the four revision tasks were four press releases. The target language (TL) for the first pretest task (Text 1) and for the first posttest task (Text 3) was Dutch (i.e., the participants’ L1) and the source language (SL) was either English or French (whichever the participants were most proficient in). For the second pretest task (Text 2) and the second posttest task (Text 4), the TL was English, French or German and the SL was Dutch for all participants.

Before starting the revision work, participants were given a revision brief for each task. The instructions stated that the participants’ revisions would be published immediately after being submitted. In other words, the participants were expected to deliver a final version of the text, without any comments or changes visible. For the first revision tasks (translation into Dutch) in both pretest and posttest (Texts 1 and 3), the revision brief stated that the participants had to revise everything. For the second revision tasks (translation from Dutch, Texts 2 and 4), the participants were asked to revise only language and style4.

Because time pressure is an important aspect of professional revision, the participants were given a limited amount of time to work on each task. Mossop (2014) suggests a speed of 600−750 words/hour for bilingual revision (which we expected for Texts 1 and 3) and 1000−1250 words/hour for monolingual revision (which we expected for Texts 2 and 4). This meant that the participants were given 35 minutes for the first task of each test (Texts 1 and 3) and 25 minutes for the second task of each test (Texts 2 and 4). For the latter tasks (Texts 2 and 4), a monolingual revision procedure was expected, in line with the specifications in the revision brief. A summary of the revision tasks is offered in Table 3.

Task details         Pretest                        Posttest
                     Task 1         Task 2          Task 1         Task 2
Source language (1)  E/F            D               E/F/G          D
Target language (1)  D              E/F             D              E/F/G
Revision level       Everything     Language/style  Everything     Language/style
Time limit (2)       35             25              35             25
Text type            Press release  Press release   Press release  Press release

(1) Dutch (D), English (E), French (F), German (G)
(2) in minutes

Table 3. Summary of revision tasks
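As a rough consistency check (the exact word counts of the four press releases are not reported here, so the figures below are implied rather than given), these time limits correspond to texts of a few hundred words at Mossop’s suggested revision speeds:

\[
600 \times \tfrac{35}{60} = 350 \quad\text{to}\quad 750 \times \tfrac{35}{60} \approx 438 \ \text{words (bilingual revision, 35 minutes)}
\]
\[
1000 \times \tfrac{25}{60} \approx 417 \quad\text{to}\quad 1250 \times \tfrac{25}{60} \approx 521 \ \text{words (monolingual revision, 25 minutes)}
\]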

The participants carried out the revision tasks in MS Word, on computers equipped with internet access and electronic dictionaries. The participants were also given electronic versions of the source texts and paper versions if they requested this. Since the participants had all worked on the computers before, their performance was not negatively affected by any unexpected environmental factors.

4.2.2 Keystroke-logging software

Inputlog, the keystroke-logging tool used in this study, was developed at the University of Antwerp (Leijten and Van Waes 2013). It logs writing processes in experimental settings and guarantees a high degree of ecological validity. The program logs all keyboard and mouse events in every Windows environment (e.g., MS Word). This means that researchers know, for example, which dictionaries have been used, which words have been looked up and which websites have been consulted. Like most logging tools, Inputlog is relatively unobtrusive. In addition, it allows participants to work in their usual word processor, which is generally MS Word. This is an advantage compared with other tools that work exclusively in a particular interface.

As explained by its developers (Leijten, Van Waes and Van Horenbeeck 2015), Inputlog 6.0 features five modules: (1) Record, (2) Pre-process, (3) Analyse, (4) Post-process and (5) Play. Since we mainly used the Record and Analyse modules, we will not report on the other modules (see Inputlog.net for more information). The Record module is used to start the recording of the writing or, in our case, the revision process. When the recording is stopped, two files are automatically generated: an IDFX file, which is used as the basis for all analyses in the Analyse module, and a Wordlog file with the final version of the written task, in our case, the revision. Inputlog offers 14 different analyses, such as the General analysis (an XML file in which every line represents an input action), the Summary analysis (an XML file with an overview of basic statistics about produced words and sentences, pausing behaviour, etc.), the Source analysis (an XML file with analyses of the sources used) or the Revision analysis (a Revision Matrix or linear representation in which revisions are listed, together with some basic time and position data).

To test our hypotheses about the tools and research subcompetence, that is, “Translators and revisers use the same tools” (H1) and “Compared to translators, revisers use the same tools in a different way” (H2a and H2b), we used the Source analysis, which offers a ‘Window Statistics’ overview showing which windows have been activated, for how long (total time in seconds and relative time) and how many keystrokes were produced in each window (total and relative). For example, the participant whose revision process is represented in Figure 2 used the Van Dale dictionary for 234.904 seconds (3.9 minutes), which is 11% of the total time for the task. To test our last hypothesis about the tools and research subcompetence, that is, “Compared to translators, revisers use the same tools in a different way: they combine more resources per problem-solving process” (H2c), we used the General analysis to link each tool used to a specific ‘item’. This ‘item’ or ‘rich points’ method was also used successfully in another revision research context, to investigate revision procedures (see, for example, Robert 2012, 2013; Robert and Van Waes 2014).


Figure 2. Example of a Window Statistics overview generated by the Source analysis in Inputlog
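To illustrate the kind of computation the Window Statistics overview supports, the minimal sketch below derives each window’s share of total task time from an XML file. It is only an illustration under simplifying assumptions: the tag and attribute names (window, title, totalTime) are hypothetical and do not reproduce Inputlog’s actual output schema.

```python
# Minimal sketch: per-window time shares from a simplified XML file.
# The element and attribute names below are hypothetical, not
# Inputlog's real Source analysis schema.
import xml.etree.ElementTree as ET

def window_time_shares(path):
    """Return each window's share of total logged time, in percent."""
    root = ET.parse(path).getroot()
    windows = [(w.get('title'), float(w.get('totalTime')))
               for w in root.iter('window')]
    total = sum(seconds for _, seconds in windows)
    return {title: 100 * seconds / total for title, seconds in windows}

# A window titled 'Van Dale' active for 234.904 s in a roughly
# 35-minute (2,100 s) task would yield about 11%, as in Figure 2.
```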

5. Results

All statistical tests were non-parametric tests, since the sample size was rather small. Results are reported as recommended by Field (2009: 550–558).
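For transparency, the sketch below shows how such tests and the accompanying effect sizes can be computed. It is an illustrative reconstruction, not the authors’ analysis script: SciPy is assumed, the example data are invented, and the two-sided asymptotic p values will differ slightly from the exact one-tailed significance values reported in the tables. The z approximations and the effect size r = z/√N follow Field (2009); for the within-group (Wilcoxon) tests, taking N as the group size reproduces the r values reported in Tables 6, 9 and 12.

```python
# Illustrative sketch (not the authors' analysis script): the two
# non-parametric tests reported below, with effect size r = z / sqrt(N)
# as recommended by Field (2009). The z scores use the standard normal
# approximations of U and T (no tie correction).
import math
from scipy.stats import mannwhitneyu, wilcoxon

def mann_whitney_r(group_a, group_b):
    """Between-group (independent-samples) comparison: returns U, z, p, r."""
    n1, n2 = len(group_a), len(group_b)
    u, p = mannwhitneyu(group_a, group_b, alternative='two-sided')
    z = (u - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, z, p, z / math.sqrt(n1 + n2)  # N = all participants

def wilcoxon_r(pretest, posttest):
    """Within-group (paired-samples) comparison: returns T, z, p, r."""
    n = len(pretest)  # assumes no zero differences, for simplicity
    t, p = wilcoxon(pretest, posttest)
    z = (t - n * (n + 1) / 4) / math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return t, z, p, z / math.sqrt(n)  # N = group size, as in Tables 6, 9, 12

# Invented example data: posttest % of task time spent in all resources.
experimental = [35, 40, 28, 33, 36, 31, 42, 30, 38, 29, 34, 37]  # n = 12
control = [9, 14, 11, 8, 16, 12, 10, 15, 13]                     # n = 9
u, z, p, r = mann_whitney_r(experimental, control)
print(f"U = {u:.1f}, z = {z:.3f}, p = {p:.3f}, r = {r:.2f}")
```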

5.1 Translators and revisers use the same tools (H1)

As explained in Section 4, Inputlog was used to trace the tools used by the participants: the Source Analysis file provided a list of these tools. Below, we will report on the tools used by all participants in the pretest and the posttest in the L1 (Texts 1 and 3). We will not include the pretest and the posttest in the L2 (Texts 2 and 4) because there were three possible L2s (English, French, German), which makes comparisons more difficult and further reduces group sizes.

The tool that was used by almost all participants in both the pretest (95% of the participants) and the posttest (90% of the participants) was Google Search and/or Bing. In separate analyses, we see that the experimental group used that tool even more than the control group: all members of the experimental group made use of that tool in the pretest as well as in the posttest, compared with 89% and 78% respectively for the control group.

The second most popular tool was the Van Dale dictionary (standard bilingual dictionary for Dutch, in combination with English, French, German and other languages not included in this study). The dictionary was used by 86% of the participants in both the pretest and posttest. In the pretest, it was used by 75% of the participants in the experimental group and by 100% of the participants in the control group. In the posttest, we noticed the opposite: the dictionary was used by 100% of the participants in the experimental group and by 67% of the participants in the control group.

Other popular tools were Wikipedia, online dictionaries and specific websites for the Dutch language. Wikipedia was used by 43% of all participants in both the pretest and the posttest. In the pretest, Wikipedia was used more by the control group (56%) than by the experimental group (33%), whereas the opposite was observed in the posttest (experimental group: 58%; control group: 22%). Online dictionaries (e.g., bilingual dictionaries, thesauruses, idiom dictionaries) were used more in the posttest than in the pretest (52% vs. 33%), an increase observed in both groups (experimental group: from 25% to 50%; control group: from 44% to 56%). As far as specific websites about the Dutch language are concerned, an increase in use was also observed among all participants taken together (29% in the pretest vs. 48% in the posttest), but the increase is due to the experimental group (from 33% in the pretest to 75% in the posttest) and not to the control group, which showed a decrease in use (from 22% to 11%). It should be noted that the experimental group attended a special lecture dedicated to websites about the Dutch language, so the increase in the use of such websites was expected. Other tools (e.g., online translation sites, Linguee, IATE) were used, but such use was rare.

In conclusion, it can be said that translators and revisers use the same tools, since we found no tool that was used by one group but not at all by the other.

5.2 Compared to translators, revisers use the same tools in a different way (H2)
5.2.1 Revisers spend more time in resources than translators (H2a)

Although we had hypothesised that translators and revisers would use the same tools, which they did, we expected that they would use the tools differently. As explained above, the Source Analysis file in Inputlog provides a duration analysis of the use of each tool. To avoid a high number of categories, we used the following typology: Van Dale dictionaries, Google/Bing searches and other internet activity (Google/Bing excluded, but including diverse websites about the Dutch language or related to the topic of the text, translation websites, etc.). The time spent in these resources was compared with the time spent in the target text.

In the pretest, there were no between-group differences in the time the participants spent in the target text, in Van Dale dictionaries, on the internet, or in all resources taken together. It should be noted that the time spent in the ST was not taken into account, since some participants used the electronic version of the ST while others used the paper version. In the posttest, however, there were between-group differences in all categories, with the experimental group spending significantly less time in the target text and significantly more time in the resources than the control group. As far as within-group comparisons are concerned, the experimental group spent less time in the TT in the posttest than in the pretest, but the difference was not significant. The differences in time spent in Van Dale dictionaries and in Google/Bing were significant, as was the difference for all resources taken together; the difference for the internet category was not. For the control group, there were two significant differences: the time spent in the TT (more in the posttest) and in all resources taken together (less in the posttest). All descriptive statistics are summarised in Table 4 and the test statistics in Tables 5 and 6.

Our hypothesis related to the time spent in the TT and in resources is thus confirmed: revisers spend more time in resources than translators. However, it should be noted that, as explained in Section 4.1, we worked with trainees, who received revision training when enrolled in the experimental group. In other words, it is probable that the result related to the time spent in resources can be attributed to the training, in particular to the part on revision principles, where trainees learnt that they should make only necessary changes that they can justify. Conscious of that necessity, trainees took the time to check aspects related to their changes, hence the difference with the control group. The question is, therefore, whether we would find the same results when comparing professional translators with professional revisers, which is the aim of future research (see Section 6).

                       Experimental group  Control group
                       Mean   Median       Mean   Median
Pretest
Target text            72.50  78.00        78.00  82.00
Van Dale dictionaries  5.58   4.50         5.67   4.00
Google/Bing search     7.83   6.00         8.78   7.00
Internet               7.42   6.00         6.89   6.00
All resources          20.83  19.00        21.22  18.00
Posttest
Target text            65.92  64.50        85.89  89.00
Van Dale dictionaries  12.08  12.00        3.78   3.00
Google/Bing search     10.42  7.50         4.44   3.00
Internet               10.92  10.00        5.11   2.00
All resources          33.67  35.00        13.44  9.00

Table 4. Descriptive statistics – Time distribution (in %)

                 Target text   Van Dale      Google/Bing   Internet       All resources
Pretest
U                41.0          53.5          46.5          52.0           52.0
Z                -.926         -.036         -.538         -.143          -.142
p                .186          .493          .306          .451           .451
Posttest
U                9.0           12.0          21.5          26.0           9.0
Z                -3.203        -3.001        -2.316        -1.995         -3.205
p                .000*         .001*         .010*         .023*          .000*
r (effect size)  -.69 (large)  -.65 (large)  -.50 (large)  -.43 (medium)  -.69 (large)

Note: *=significant at the .05 level (exact sig. 1-tailed). Effect sizes are only reported for significant results.
Table 5. Between-group comparisons of time spent in TT and resources (Mann-Whitney)

                 Target text   Van Dale      Google/Bing   Internet  All resources
Experimental group
T                21.00         7.50          12.50         14.50     5.50
Z                -1.412        -2.477        -1.825        -1.646    -2.630
p                .088          .005*         .036*         .053      .003*
r (effect size)  –             -.71 (large)  -.52 (large)  –         -.75 (large)
Control group
T                6.00          11.00         10.00         7.00      6.00
Z                -1.956        -1.368        -1.481        -1.187    -1.958
p                .025*         .102          .082          .148      .012*
r (effect size)  -.65 (large)  –             –             –         -.65 (large)

Note: *=significant at the .05 level (exact sig. 1-tailed). Effect sizes are reported only for significant results.
Table 6. Within-group comparisons of time spent in TT and resources (Wilcoxon)

5.2.2 Revisers use the same tools more frequently than translators (H2b)

We also aimed to trace the number of times a particular tool was used. To do so, we used the General Analysis file generated by Inputlog. Again, we will concentrate on the pretest and posttest in the L1 (Texts 1 and 3) below. The number of times a Van Dale dictionary was used was counted, as well as the number of Google/Bing searches. Even with the General File, it was difficult to count the number of times a particular website was used. Consequently, we decided to concentrate on operations that could be clearly identified and counted, that is, Van Dale and Google or Bing searches.

As far as Van Dale dictionaries are concerned, we also distinguished between the combination French–Dutch or English–Dutch on the one hand (SL into TL) and the combination Dutch–French and Dutch–English (TL into SL) on the other hand. Although the combination SL–TL was expected to be much more frequent, we know from experience that students also double-check some terms, using TL–SL dictionaries.

In the pretest, there were no between-group differences, but in the posttest, there were significant between-group differences for all tests, that is, the number of times the Van Dale SL–TL was used, the number of times the Van Dale TL–SL was used, and the number of Google or Bing searches: the experimental group used these tools significantly more frequently than the control group.

As far as within-group differences are concerned, significant differences were observed within the experimental group, with an increase in the posttest with respect to the use of the TL–SL Van Dale dictionary and the use of Google/Bing, but not in the use of the SL–TL Van Dale dictionary. In the control group, no significant differences were observed. All descriptive statistics are summarised in Table 7 and the test statistics in Tables 8 and 9.

                                                      Experimental group  Control group
                                                      Mean   Median       Mean   Median
Pretest
Number of searches in Van Dale Source–Target (VD ST)  5.42   4.00         4.89   4.00
Number of searches in Van Dale Target–Source (VD TS)  .92    .50          1.00   .00
Number of Google or Bing searches                     8.92   7.50         10.78  7.00
Posttest
Number of searches in Van Dale Source–Target (VD ST)  8.83   9.00         3.33   2.00
Number of searches in Van Dale Target–Source (VD TS)  7.42   4.50         .67    .00
Number of Google or Bing searches                     15.92  12.50        7.00   5.00

Table 7. Descriptive statistics – Tools use frequency

                 Pretest                       Posttest
                 VD ST   VD TS   Google/Bing   VD ST         VD TS         Google/Bing
U                52.50   44.00   51.00         23.50         9.50          21.50
Z                -.107   -.792   -.214         -2.186        -3.228        -2.313
p                .467    .220    .424          .014*         .000*         .010*
r (effect size)  –       –       –             -.47 (large)  -.70 (large)  -.50 (large)

Note: *=significant at the .05 level (exact sig. 1-tailed). Effect sizes are reported only for significant results.
Table 8. Between-group comparisons of the use of Van Dale dictionaries and Google/Bing Searches (Mann-Whitney)

                 Experimental group                  Control group
                 VD ST   VD TS         Google/Bing   VD ST   VD TS   Google/Bing
T                13.50   1.50          4.00          8.50    6.00    12.00
Z                -1.067  -2.808        -2.749        -1.334  -.412   -1.245
p                .156    .001*         .002*         .098    .406    .119
r (effect size)  –       -.81 (large)  -.79 (large)  –       –       –

Note: *=significant at the .05 level (exact sig. 1-tailed). Effect sizes are only reported for significant results.
Table 9. Within-group comparisons of the use of Van Dale dictionaries and Google/Bing searches (Wilcoxon)

Consequently, we can conclude that translators and revisers use the same tools. However, revisers not only spend more time in all the resources that they use, they also use these tools more frequently than translators.

5.2.3 Revisers combine more resources per problem-solving process than translators (H2c)

To test this hypothesis, the number of times a particular tool was used for one and the same item (see Section 4.2.2) was determined using the General file analysis in Inputlog. It should be noted that the analysis concentrated on items only, which means that only problem-solving processes related to items were taken into account. The results reported apply to all detected items, that is, (1) items for which a proper correction was made, (2) items for which an inadequate correction was made and (3) items for which there was no visible correction but which we know were detected, since a relevant search operation could be traced in the General file.

For all the detected items, we determined whether no search operation was conducted (no tool used, ‘0 search operations’), one search operation was conducted (one tool used, ‘1 search operation’) or two or more search operations were conducted (2, 3 or 4 tools, the maximum observed being 4; referred to as ‘multiple search operations’ below). Subsequently, we calculated, for each participant, the number of items for which there were zero, one or multiple search operations respectively, and we related these numbers to the number of items the participant had detected.

The result is a percentage of items with zero, one or multiple search operations for each participant, and thus three scores per participant. Means were calculated for each test and each group, and statistical tests were conducted. The descriptive statistics are summarised in Table 10.
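As an illustration of this scoring procedure, the minimal sketch below turns a list of per-item search counts into the three percentages just described. The data are invented, not drawn from the study:

```python
# Minimal sketch of the per-participant scores described above.
# 'searches_per_item' is invented data: one entry per detected item,
# giving the number of traced search operations for that item.
from collections import Counter

def search_operation_profile(searches_per_item):
    """Percentage of detected items with 0, 1 or multiple (2+) searches."""
    n = len(searches_per_item)
    counts = Counter('0' if s == 0 else '1' if s == 1 else 'multiple'
                     for s in searches_per_item)
    return {k: 100 * counts[k] / n for k in ('0', '1', 'multiple')}

print(search_operation_profile([0, 0, 1, 3, 0, 2, 0, 1, 0, 0]))
# -> {'0': 60.0, '1': 20.0, 'multiple': 20.0}
```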

                            Experimental group  Control group
                            Mean   Median       Mean   Median
Pretest
0 search operations         71.33  69.70        74.89  73.33
1 search operation          19.08  16.99        11.65  6.67
Multiple search operations  9.58   6.07         13.46  11.76
Posttest
0 search operations         65.16  62.91        72.22  70.00
1 search operation          22.63  21.82        24.13  22.22
Multiple search operations  12.21  13.33        3.65   0.00

Table 10. Descriptive statistics – Combination of resources

As far as between-group comparisons are concerned (Table 11), no significant difference between the experimental group and the control group was observed in the pretest. In the posttest, however, one significant between-group difference was observed, related to the score for multiple search operations (two or more tools for one and the same item): the experimental group scored higher than the control group, which means that, in the posttest, the experimental group conducted more multiple search operations than the control group did. As far as within-group comparisons are concerned (Table 12), no difference was observed within the experimental group between the pretest and the posttest, but one difference was observed within the control group with respect to the percentage of multiple search operations, which was significantly lower in the posttest.

Consequently, since the between-group difference in the posttest appears to be driven by a decrease in the control group rather than by an increase in the experimental group, we cannot conclude that the experimental group combined more resources (multiple search operations) than the control group or, in other words, that revisers combine more resources than translators. However, the analysis was limited to items only and thus did not include all problem-solving processes of each participant. Therefore, further analysis is necessary.

                 Pretest                        Posttest
                 0 search  1 search  Multiple   0 search  1 search  Multiple
U                45.50     36.00     38.50      36.50     52.00     13.00
Z                -.605     -1.286    -1.109     -1.246    -.142     -2.939
p                .283      .105      .141       .112      .452      .001*
r (effect size)  –         –         –          –         –         -.64 (large)

Note: *=significant at the .05 level (exact sig. 1-tailed). Effect sizes are reported only for significant results.
Table 11. Between-group comparisons for the combination of resources (Mann-Whitney)

 

                 Experimental group              Control group
                 0 search  1 search  Multiple    0 search  1 search  Multiple
T                19.50     23.00     24.00       18.00     6.00      3.00
Z                -1.530    -.889     -1.177      -.533     -1.680    -2.310
p                .067      .207      .133        .326      .055      .010*
r (effect size)  –         –         –           –         –         -.77 (large)

Note: *=significant at the .05 level (exact sig. 1-tailed). Effect sizes are reported only for significant results.
Table 12. Within-group comparisons for the combination of resources (Wilcoxon)

6. Conclusion

In order to investigate the competences that translators do not necessarily have and revisers need, we set up a pilot study to compare translators with revisers in their translation revision behaviour, in particular in their use of tools (dictionaries, websites, etc.) during the revision process. The hypotheses presented above are that translators and revisers use the same tools, but use them differently. For reasons of feasibility (see Section 4.1 and Endnote 3), our 'translators' and 'revisers' were 21 students, assigned to either an experimental group (translation trainees taking a revision module, i.e. revision trainees) or a control group (translation trainees or language-related trainees not taking a revision module, i.e. translation or language trainees). In other words, the participants were not professional translators and revisers, which is a limitation of this study (see below). The keystroke-logging program Inputlog was used to analyse the revision process and, thus, the use of tools during the revision process.

Since there was no tool that was used by only one group of participants, we were able to confirm the first hypothesis: translators and revisers do indeed use the same tools. However, we also established that revisers use tools and resources more frequently and spend more time investigating queries than translators do. Finally, as far as the combination of tools is concerned, we could not conclusively state that revisers combine more resources (or tools) per problem than translators. Although these results reveal some trends in the use of tools, they will have to be confirmed by a broader study involving professional translators and revisers. The question is indeed whether professional revisers, compared with professional translators, also use the same tools but use them differently. Translation and revision trainees do: after taking a revision module, trainees behave differently than they did before the module, but what about professionals in their daily practice? The study has to be reproduced on a larger scale and with professional participants.

Since this was a preliminary pilot study, there were some other limitations to the research, in addition to the participants' profile. First, the keystroke-logging software registered only what happened in active windows on the screen, so we needed to allow for a margin of error for participants who had multiple windows open and looked at different windows without activating them. However, forcing participants to work with only full-screen windows would have compromised the ecological validity of the research. Second, our samples were relatively small, so we must be careful about generalising our findings to larger populations. This seems to be a recurrent problem in Translation Studies when students are recruited for feasibility reasons, including financial considerations. For ethical reasons, students cannot be obliged to take part in experimental research, unless this is part of a module in research competence, for example. Even then, in departments of translation where the number of students is limited and the number of language combinations rather high, as is the case in our department, finding participants on a voluntary basis remains a challenge.

Future research will be conducted to address these shortcomings. In addition, the research will be extended to include post-editing, with a view to comparing the competences required for translating, revising and post-editing (as three closely related but different tasks). The final goal is to develop empirically tested competence models for translation revision and post-editing, based on the principles of the existing translation competence models5.

Bibliography
  • Barabé, Donald (2013). “Société, technologie et traduction : perspectives et impacts.” JosTrans 19, 41-61.
  • Biel, Łucja (2011). “Training translators or translation service providers? EN 15038:2006 standard of translation services and its training implications.” JosTrans 16, 61-76.
  • Brunette, Louise (2002). “Normes et censure : ne pas confondre.” TTR : traduction, terminologie, rédaction, 15(2), 223-233.
  • Brunette, Louise (2007). “Relecture-révision, compétences indispensables du traducteur spécialisé.” Elisabeth Lavault-Olléon (ed.) (2007). Traduction spécialisée: pratiques, théories, formations. Bern: Lang, 225-236.
  • Brunette, Louise and Chantal Gagnon (2013). “Enseigner la révision à l'ère des wikis : là où l'on trouve la technologie alors qu'on ne l'attendait plus.” JosTrans, 19, 96-121.
  • Declercq, Christophe (2014). “Crowd, cloud and automation in the translation education community.” Cultus, 7, 37-56.
  • EMT Expert Group (2009). “Competences for professional translators, experts in multilingual and multimedia communication.” https://goo.gl/cekhYK (consulted 03.03.2016)
  • European Committee for Standardization (2006). European Standard EN 15 038. Translation services - Service requirements. Brussels: European Committee for Standardization.
  • Field, Andy (2009). Discovering statistics using SPSS. London: SAGE Publications Ltd.
  • Garcia, Ignacio (2011). “Translating by post-editing: is it the way forward?” Machine Translation, 25, 217-237.
  • Göpferich, Susanne (2008). Translationsprozessforschung. Tübingen: Narr.
  • (2009). “Towards a model of translation competence and its acquisition: the longitudinal study TransComp.” Susanne Göpferich, Arnt Lykke Jakobsen and Inger M. Mees (eds) (2009). Behind the mind. Methods, models and results in translation process research. Copenhagen: Samfundslitteratur, 11-37.
  • (2013). “Translation competence: explaining development and stagnation from a dynamic systems perspective.” Target: International Journal of Translation Studies, 25(1), 61-76.
  • Göpferich, Susanne and Riitta Jääskeläinen (2009). “Process research into the development of translation competence: Where are we, and where do we need to go?” Across Languages and Cultures, 10(2), 169-191.
  • Hansen, Gyde (2009). “The speck in your brother's eye - the beam in your own. Quality management in translation and revision.” Gyde Hansen, Andrew Chesterman and Heidrun Gerzymisch-Arbogast (eds) (2009). Efforts and models in interpreting and translation research: a tribute to Daniel Gile. Amsterdam: John Benjamins, 255-280.
  • Hernández-Morin, Katell (2009). “Pratiques et perceptions de la révision en France.” Traduire, 2(221), 58-78.
  • Horguelin, Paul. A. and Louise Brunette (1998). Pratique de la révision. Brossard, QC: Linguatech.
  • Hurtado Albir, Amparo (2015). “The acquisition of translation competence. Competences, tasks, and assessment in translator training.” Meta, 60(2), 256-280.
  • Hurtado Albir, Amparo (ed.) (2016). Researching translation competence by PACTE Group. Amsterdam: Benjamins.
  • International Organization for Standardization (2015). ISO 17100:2015 - Translation services -- Requirements for translation services. Geneva: ISO.
  • Kelly, Dorothy (2005). A handbook for translator trainers: a guide to reflective practice (Translation Practices Explained). Manchester: St. Jerome.
  • Künzli, Alexander (2005). “What principles guide translation revision? A combined product and process study.” Ian Kemble (ed.) (2005). Translation norms: what is ‘normal’ in the translation profession? Proceedings of the conference held on 13th November 2004 in Portsmouth. Portsmouth: University of Portsmouth, School of Languages and Area Studies, 31-43.
  • Künzli, Alexander (2006). “Teaching and learning translation revision: some suggestions based on evidence from a think-aloud protocol study.” Mike Garant (ed.) (2006). Current trends in translation teaching and learning. Helsinki: Helsinki University, 9-24.
  • Leijten, Mariëlle, Luuk Van Waes and Erik Van Horenbeeck (2015). “Analyzing writing process data : a linguistic perspective.” Georgeta Cislaru (ed.) (2015). Writing(s) at the crossroads : the process-product interface. Amsterdam: Benjamins, 277-302.
  • Leijten, Mariëlle and Luuk Van Waes (2013). “Keystroke logging in writing research: using Inputlog to analyze and visualize writing processes.” Written Communication, 30(3), 358-392.
  • Lesznyák, Márta (2008). Studies in the development of translation competence. PhD thesis. Pécs: Pécsi Tudományegyetem.
  • Lisaité, Donata et al. (2016). “Negotiating meaning at a distance: peer review in electronic learning translation environments.” Marcel Thelen et al. (eds) (2016). Translation and Meaning, New Series. Frankfurt am Main: Peter Lang, 99-113.
  • Mossop, Brian (1992). “Goals of a revision course.” Cay Dollerup and Anne Loddegaard (eds) (1992). Teaching translation and interpreting: training, talent and experience. Papers from the first Language International Conference, Elsinore, Denmark. Amsterdam: John Benjamins, 411-420.
  • Mossop, Brian (2001). Revising and editing for translators. Manchester: St. Jerome.
  • (2007). Revising and editing for translators (2nd edition). Manchester: St. Jerome.
  • (2014). Revising and editing for translators (3rd edition). New York: Routledge.
  • PACTE (2000). “Acquiring translation competence: hypotheses and methodological problems in a research project.” Alison Beeby, Doris Ensinger and Marisa Presas (eds) (2000). Investigating Translation. Amsterdam: John Benjamins, 99-106.
  • (2003). “Building a translation competence model.” Fabio Alves (ed.) (2003). Triangulating translation: perspectives in process oriented research. Amsterdam: John Benjamins, 43-66.
  • (2005). “Investigating translation competence: conceptual and methodological issues.” Meta, 50(2), 609-619.
  • (2008). “First results of a translation competence experiment: 'Knowledge of translation' and 'Efficacy of the translation process'.” John Kearns (ed.) (2008). Translator and interpreter training. Issues, methods and debates. London: Continuum, 104-126.
  • (2009). “Results of the validation of the PACTE translation competence model: acceptability and decision making.” Across Languages and Cultures, 10(2), 207-230.
  • (2011a). “Results of the validation of the PACTE translation competence model: translation problems and translation competence.” Cecilia Alvstad, Adelina Hild and Elisabet Tiselius (eds) (2011). Methods and strategies of process research: integrative approaches in Translation Studies. Amsterdam: John Benjamins, 317-341.
  • (2011b). “Results of the validation of the PACTE translation competence model: translation project and dynamic translation index.” Sharon O’Brien (ed.) (2011). Cognitive explorations of translation. London: Continuum, 30-53.
  • (2014). “First results of PACTE group's experimental research on translation competence acquisition: the acquisition of declarative knowledge of translation.” Ricardo Muñoz Martín (ed.) (2014). Minding translation/Con la traducción en mente (Special Issue 1). San Vincente del Raspeig: Universitat d'Alacant, 85-115.
  • (2015). “Results of PACTE's experimental research on the acquisition of translation competence. The acquisition of declarative and procedural knowledge in translation. The dynamic translation index.” Translation Spaces, 4(1), 29-53.
  • Pym, Anthony (2013). “Translation skill-sets in a machine-translation age.” Meta, 58(3), 487-503.
  • Rigouts Terryn, Ayla et al. (Forthcoming). “Conceptualizing translation revision competence: a pilot study on the acquisition of the 'knowledge about revision' and 'strategic' subcompetences.” Across Languages and Cultures.
  • Robert, Isabelle S. (2008). “Translation revision procedures: an explorative study.” Translation and Its Others. Selected Papers of the CETRA Research Seminar in Translation Studies 2007. http://www.arts.kuleuven.be/cetra/papers/files/robert.pdf (consulted 10.04.2016)
  • (2012). La révision en traduction : les procédures de révision et leur impact sur le produit et le processus de révision. PhD thesis. Antwerpen: University of Antwerp.
  • (2013). “Translation revision: does the revision procedure matter?” Magdalena Bartlomiejczyk et al. (eds) (2013). Treks and tracks in translation studies. Amsterdam: Benjamins, 87-102.
  • Robert, Isabelle, Aline Remael and Jimmy J. J. Ureel (2016). “Towards a model of translation revision competence.” The Interpreter and Translator Trainer, 10(2). doi: 10.1080/1750399X.2016.1198183
  • Robert, Isabelle and Luuk Van Waes (2014). “Selecting a translation revision procedure: do common sense and statistics agree?” Perspectives, 22(3), 304-320.
  • Robert, Isabelle S. and Louise Brunette (2016). “Should revision trainees think aloud while revising somebody else’s translation? Insights from an empirical study with professionals.” Meta, 61(2), 320-345.
  • Robert, Isabelle S. et al. (Forthcoming). “Conceptualizing translation revision competence: a pilot study on the fairness and tolerance attitudinal component.” Perspectives: Studies in Translatology.
  • Schäffner, Christina (2012). “Translation competence: training for the real world.” Séverine Hubscher-Davidson and Michal Borodo (eds) (2012). Global trends in translator and interpreter training. Mediation and culture. London: Continuum, 30-44.
  • Schjoldager, Anne, Kirsten Wølch Rasmussen, and Christa Thomsen (2008). “Précis-writing, revision and editing: piloting the European Master in Translation.” Meta, 53(4), 798-813.
Biographies

Isabelle S. Robert and Jimmy Ureel (PhDs) are both lecturers and researchers at the Department of Applied Linguistics/Translators & Interpreters at the University of Antwerp (Belgium) and members of the research group TricS (https://www.uantwerpen.be/en/rg/translation-interpreting/). Aline Remael (PhD) is Professor and Department Head at the same department. At the time of the study reported in this paper, Ayla Rigouts Terryn (MA) was a researcher at the same department. She is now a researcher and PhD student in the LT³ research group at Ghent University's Department of Translation, Interpreting and Communication. They can be contacted at the following addresses: isabelle.robert@uantwerpen.be; jimmy.ureel@uantwerpen.be; aline.remael@uantwerpen.be and ayla.rigoutsterryn@ugent.be.




Acknowledgements

The research reported here was supported financially by the Bijzonder Onderzoeksfonds (Special Research Fund) of the University of Antwerp (Belgium), through a one-year STIMPRO project (no. 29809), with Isabelle Robert as applicant, Jim Ureel and Aline Remael as co-applicants, and Ayla Rigouts Terryn as researcher.

Endnotes

Note 1:
First results related to two other subcompetences of the model will be published soon (Rigouts Terryn et al. Forthcoming).

Note 2:
Other hypotheses were formulated for the same data set: see Rigouts Terryn et al. (Forthcoming). In addition, another publication has been submitted and accepted: Robert et al. (Forthcoming).

Note 3:
The study was conducted within the scope of a STIMPRO project of the University of Antwerp, Belgium. These projects are ‘stimulation’ projects for departments that have recently been integrated into the university, such as the Department of Applied Linguistics, Translation and Interpreting, where the research was conducted. Funding is limited to one researcher and a limited budget for experiments, which makes recruiting professionals and remunerating them financially almost impossible. Experience (Robert 2012) has shown that recruiting participants and organising experiments at revisers’ workplaces is time-consuming and cannot be realised in such a short period.

Note 4:
In other words, the revision brief was different for revision into the mother tongue (Dutch) on the one hand and revision into the foreign language on the other. Two different revision briefs were created to determine their impact on the participants’ selection of a revision procedure, which is considered to be part of the strategic subcompetence for revision. For results, see Rigouts Terryn et al. (Forthcoming).

Note 5:
We invite researchers to get in touch if they are carrying out similar research or would like to carry out parallel testing.