Effects of Assessor Intervention on the Validity of Assessment Centers
Assessment centers appeared in France around twenty years ago. A simple definition considers an assessment center to be an evaluation method that uses a variety of tools, including systematic simulation exercises. In other words, it is a process whose central concepts are multiple evaluation and simulation. The method serves various purposes, such as recruitment, internal selection for mobility, identification of potential, career guidance, or the definition of training needs and development priorities.
In a selection context, the goal is to provide an overall assessment of the candidate's ability to succeed in certain aspects of the position offered, or in the functions he or she would normally perform. However, several factors, particularly concerning content validity, have been reported as possible causes of a decline in the validity of the procedure, ranking it only fourth among the most valid occupational selection methods (Schmidt & Hunter, 1998).
Caldwell, Thornton, and Gruys (2003) argue that the validity of the procedure is reduced by errors in the design and use of assessment centers (ACs), one of which concerns the intervention of the assessors. To gauge the significance of these errors and respond to them, we consider a fundamental aspect of ACs: the multiple assessors and their role. The question posed here is: which factors relating to the intervention of the assessors may affect the validity of the procedure?
Characterizing the assessors means identifying the populations from which they are drawn. Assessors are selected on the basis of their managerial position, their knowledge, and their level of technical expertise in the profession being evaluated. Managers more than two levels above the position for which the AC procedure is conducted are generally less effective assessors than those hierarchically closest to the candidates (Caldwell et al., 2003).
According to Lievens (2002), the strategies assessors adopt for using the information provided during the evaluation affect their effectiveness. Effective assessors use less information than ineffective ones. Moreover, effective assessors appear to consult the same sources of information at different points in the assessment, whereas ineffective assessors do not necessarily rely on the same sources. The effectiveness of assessors also appears to be positively related to the consistency of their judgments.
Experience also characterizes the assessors. "Experienced assessors" are considered the best judges because they have accumulated considerable evaluation experience, whereas "inexperienced assessors" may have some experience in the field but are not fully qualified. Differences in judgment between the two groups are comparable to those found when comparing effective and ineffective judges. Indeed, experience appears to allow better discrimination between relevant and irrelevant information.
According to Lievens (2002), a lack of experience implies a greater need for information to compensate for the gaps. Training assessors for the AC procedure can improve the quality of the AC, regardless of the country studied (Krause & Thornton, 2009). Lievens and Klimoski (2001) propose two models for understanding how the assessors and the evaluation process can affect the quality of measurement in ACs.
Tags: assessment centers; validity of assessment centers; assessor intervention; effects of assessor intervention