Error deduction and descriptors – A comparison of two methods of translation test assessment

Barry Turner, Miranda Lai, Neng Huang

Abstract


This paper examines two assessment methodologies used for large-scale translating and interpreting accreditation testing: error analysis/deduction and descriptors. A report by the Royal Melbourne Institute of Technology (RMIT University) (Turner and Ozolins, 2007) showed that the UK Institute of Linguists and the American Translators Association are among the international testing bodies that have moved, or are moving, towards using descriptors or combining negative marking with descriptors. This paper explores whether the Australian National Accreditation Authority for Translators and Interpreters (NAATI) could adopt a descriptor-based approach to assessment without risk to the reliability or accountability of its public examination system. The NAATI assessment system is used as a benchmark against which assessment outcomes from the descriptor-based translation component of the UK Institute of Linguists Diploma in Public Service Interpreting (DPSI) are compared. The most significant finding of the research is a high correlation between assessment outcomes in the two systems, indicating that a descriptor system might be as reliable and accountable as the current NAATI system.

Keywords


translation; translation examination; negative deduction; error analysis; descriptor; marking; accreditation