Reliable Evaluation Tools in Legal Interpreting: a test case

Heidi Salaets, Katalin Balogh

Abstract


In recent decades, test design, assessment and evaluation procedures have received much attention and have focused on concepts such as quality, validity and reliability. This obviously also applies to the highly complex testing of interpreters’ skills, including legal interpreting. In this paper, we first discuss the significant changes made to the final examination procedure at the end of the LIT (Legal Interpreting and Translation) course at KU Leuven, Antwerp campus, which have been complemented by an introductory workshop for the graders. It is important to mention that graders can be language experts as well as external legal experts (judges, prosecutors, police officers, lawyers, etc.). A comparison of candidates’ scores between 2008 and 2013 (a period in which different evaluation grids were used) shows a tendency towards more overall failures. In addition, an analysis of the graders’ comments demonstrates that results are more consistent and that the comments mirror the results more closely. The new evaluation method clearly leaves less room for grader subjectivity, which suggests that candidates are tested in a more transparent and reliable way. Follow-up research (in grader focus groups) and observation of the actual evaluation process will enable us to ensure that graders are comfortable with the new method and to check whether they use it consistently. Verifying whether the overall procedure actually produces better and more competent legal interpreters is a further important step needed to complete this research project.


Keywords


legal interpreting; evaluation procedure; reliability; legal expert/grader; language grader; evaluation grid
