Training providers or the relevant quality assurance institutions assess the training provided and the system’s performance during the evaluation phase, using data collected on the processes and outcomes of the training providers’ services. This phase ensures the effectiveness and relevance of the training provided and makes it possible to identify areas for improvement. This chapter presents the criteria linked to the evaluation phase, as well as the indicators on which training providers should generally collect information.
5. Evaluation
Abstract
Evaluation is the third phase of the EQAVET quality assurance cycle. The objective of this phase is to assess the training system and its providers using data collected on the processes and outcomes of training. These assessments help determine whether trainees have acquired the intended knowledge and skills and ensure the relevance of training. At the same time, this stage of the QA cycle serves to identify areas for improvement.
To support QA systems in achieving the objectives of the evaluation phase, EQAVET proposes the following indicative descriptors, which provide guidance on how to set up this phase:
EQAVET indicative descriptors at provider level:
- Self-assessment/self-evaluation is periodically carried out under national and regional regulations/frameworks or at the initiative of VET providers, also covering the digital readiness and environmental sustainability of VET institutions.
- Evaluation and review covers processes and results/outcomes of education and training, including the assessment of learner satisfaction as well as staff performance and satisfaction.
- Evaluation and review includes the collection and use of data, and adequate and effective mechanisms to involve internal and external stakeholders.
- Early warning systems are implemented.
Although national QA frameworks tend to use the EQAVET indicative descriptors less in the evaluation and review phases than in the planning and implementation ones (EQAVET, 2023[6]), the evaluation-phase criteria in the frameworks analysed for this review generally overlap with the EQAVET indicative descriptors (Table 5.1). These criteria refer to gathering input from different stakeholders for the training provider evaluation, collecting data and information on specific indicators, specifying an evaluation method (self-assessment, internal or external reviews), and consolidating and evaluating data.
Table 5.1. Quality criteria covered by the different QA systems
| Criteria | EQAVET | CHE EduQua | SVN OQEA | NLD NRTO | AUT Ö-CERT (incl. QMS) | IRL QQI | FRA Qualiopi | PRT DGERT | ICE EQM | LUX Label de Qualité | ISO 21001/9001 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Input by relevant actors | x | x | x | x | x (QMS) | x | x | x | x | x | |
| Data/info collected | x | x | x | | x (QMS) | x | x | | | | |
| Evaluation method | x | x | x | x | x | x | x | x | x | x | |
| Consolidate and evaluate data | x | x | x | x | x | | | | | | |
Notes: This table presents the QA systems by their names and Ireland, the Netherlands and Portugal by the entity that is responsible for QA, as the frameworks they developed do not have a name. Since Ö-CERT is an “umbrella label”, providers must prove that they have one of the 12 accepted quality management systems (QMS) to be awarded the Ö-CERT quality label. For this reason, Ö-CERT has been analysed jointly with four of the accepted QMS: Cert NÖ, EduQua, ISO 21001 and ISO 9001. Quality areas and criteria that include (QMS) in the Ö-CERT column are covered by accepted quality labels and not by Ö-CERT directly. Each “x” in the table indicates that the given QA system includes indicators in the corresponding criterion.
Source: Author’s elaboration.
The basis for an effective evaluation of training programmes is the collection of data and information from all relevant actors involved in the training process. Most QA systems focus on gathering information from trainers and trainees, but other staff (e.g. guidance counsellors, administration, management), collaborators or employers can also be consulted. Collecting information on both the performance of staff and trainees and their satisfaction with the training services can offer a good picture of the training provider’s performance. EQM and the NRTO quality label are among the systems that collect feedback from staff and involve them in the evaluation phase, for example through participation in examination boards, QA committees or external evaluation teams. As part of their work, these staff may contribute to deciding how to assess learning outcomes, which quantitative and qualitative tools to use to analyse the relevant data, and how to compare a training provider’s results with those of other providers. The NRTO quality label additionally collects information from trainees, as do Qualiopi, QQI, the Label de Qualité and EduQua. QQI specifies that learner feedback should be collected periodically in order to obtain up-to-date information that can be used directly to design improvement actions. This indicator aims to reinforce the training provider’s quality culture by encouraging a continuous improvement cycle, as mentioned in the EduQua and QQI frameworks. Finally, some systems, such as Qualiopi and QQI, also collect information on whether employers are satisfied with the skills and knowledge their employees acquired during the training programme.
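As an illustration of how such periodic feedback from different groups might be consolidated, the sketch below averages satisfaction scores by respondent group. The record structure, group labels and 1-5 scale are assumptions made for the example, not instruments prescribed by any of the frameworks discussed.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical feedback record; real QA systems define their own instruments.
@dataclass
class Feedback:
    respondent: str   # e.g. "trainee", "trainer", "staff", "employer" (assumed labels)
    course: str
    score: int        # satisfaction on an assumed 1-5 scale

def satisfaction_by_group(records: list[Feedback]) -> dict[str, float]:
    """Average satisfaction per respondent group, as a provider might
    consolidate several periodic survey rounds into one overview."""
    groups: dict[str, list[int]] = {}
    for r in records:
        groups.setdefault(r.respondent, []).append(r.score)
    return {group: round(mean(scores), 2) for group, scores in groups.items()}

records = [
    Feedback("trainee", "welding-101", 4),
    Feedback("trainee", "welding-101", 5),
    Feedback("employer", "welding-101", 3),
]
print(satisfaction_by_group(records))  # → {'trainee': 4.5, 'employer': 3}
```

Comparing group averages across survey rounds is one simple way a provider could feed learner and employer feedback into the improvement actions mentioned above.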
Many QA frameworks specify the indicators on which providers should collect information and data for their evaluation. EQAVET proposes a list of indicators (see Table 5.2), several of which are also used by many systems: the number of participants in a programme (e.g. Qualiopi), the number of people who successfully completed or abandoned a programme (e.g. OQEA, EduQua, CERT NÖ, Qualiopi, QQI), the destination of trainees after completion of training (e.g. Qualiopi, QQI), and the prevalence of vulnerable groups among trainees (e.g. CERT NÖ, OQEA, QQI). In addition, training providers may be required to measure the satisfaction of trainees, trainers and employers with the acquired skills/competences (e.g. Qualiopi, QQI) or to provide data on the performance of provider staff or other collaborators (e.g. DGERT).
Other QA systems do not formulate specific indicators on how training providers should measure the quality of their services, expecting providers to decide how to supply this information. For example, EQM expects providers to assess the general success of their programmes, including trainees’ achievement of the learning outcomes, without specifying how this should be done. DGERT requires training providers to strengthen the position of their trainees in the labour market, while leaving it to the provider to decide how to demonstrate that this indicator has been achieved. Additionally, while trainees’ results are likely to vary with their demographic characteristics, CERT NÖ is the only system reviewed that expects training providers to also collect information on the demographic characteristics of their trainees and staff, such as gender, age or social background, allowing them to explore differences in outcomes and processes across these dimensions.
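To make the indicator collection concrete, the sketch below computes three EQAVET-style indicators (number of participants, completion rate and destinations after training) from hypothetical enrolment records. The field names and destination categories are illustrative assumptions, not definitions taken from EQAVET or any of the reviewed systems.

```python
from collections import Counter

# Hypothetical enrolment records: (trainee_id, completed?, destination after training).
enrolments = [
    ("t1", True,  "employed"),
    ("t2", True,  "further training"),
    ("t3", False, None),            # dropped out of the programme
    ("t4", True,  "employed"),
]

# Number of participants in the programme.
participation = len(enrolments)

# Share of participants who successfully completed the programme.
completed = sum(1 for _, done, _ in enrolments if done)
completion_rate = completed / participation

# Destination of trainees after completing the training.
destinations = Counter(dest for _, done, dest in enrolments if done)

print(f"participants={participation}, completion={completion_rate:.0%}")  # participants=4, completion=75%
print(dict(destinations))  # {'employed': 2, 'further training': 1}
```

A provider tracking these counts per programme and per cohort would have the raw material for most of the quantitative indicators listed in Table 5.2.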
Table 5.2. EQAVET indicators and coverage by the QA systems
| EQAVET indicators | CHE EduQua | SVN OQEA | NLD NRTO | AUT Ö-CERT (incl. QMS) | IRL QQI | FRA Qualiopi | PRT DGERT | ICE EQM | LUX Label de Qualité | ISO 21001/9001 |
|---|---|---|---|---|---|---|---|---|---|---|
| Relevance of QA systems for providers | x | x | x | x | x | x | x | x | x | n.a. |
| Investment in training of trainers | x | x | x | x | x | x | x | | | n.a. |
| Participation rate | x | x | x | x | x | x | x | x | | n.a. |
| Completion rate | x | x | x | x (QMS) | x | x | x | x | | n.a. |
| Placement rate/information | x | x | x | x | | | | | | n.a. |
| Utilisation of acquired skills at the workplace | x | x | x | x (QMS) | x | x | x | x | | n.a. |
| Unemployment rate | | | | x (QMS) | x | | | | | n.a. |
| Prevalence of vulnerable groups | | x | | x (QMS) | x | x | x | x | | n.a. |
| Mechanisms to identify training needs in the labour market | x | x | x | x (QMS) | x | x | x | x | | n.a. |
| Schemes used to promote better access to training | x | x | | x (QMS) | x | | | | | n.a. |
Notes: This table presents the QA systems by their names and Ireland, the Netherlands and Portugal by the entity that is responsible for QA, as the frameworks they developed do not have a name. Since Ö-CERT is an “umbrella label”, providers must prove that they have one of the 12 accepted quality management systems (QMS) to be awarded the Ö-CERT quality label. For this reason, Ö-CERT has been analysed jointly with four of the accepted QMS: Cert NÖ, EduQua, ISO 21001 and ISO 9001. Quality indicators that include (QMS) in the Ö-CERT column are covered by accepted quality labels and not by Ö-CERT directly. Each “x” in the table indicates that the given QA system includes a corresponding indicator.
Source: Author’s elaboration.
Once the relevant data and information have been collected, QA frameworks specify the evaluation method used to analyse them in the evaluation phase. This can be a self-assessment, an internal or external review, or a combination of these. As presented in Table 2.2, the systems reviewed for this report generally use either a self-assessment (e.g. the NRTO quality label) or an external review (e.g. Qualiopi), possibly combining both instruments (e.g. EduQua and QQI). While a self-assessment encourages the development of a quality culture within the training provider, it may produce assessment results that are not comparable across training providers. External reviews, on the other hand, can standardise assessments, making them comparable and ensuring a minimum level of quality, but may be less successful at instilling a drive for continuous improvement within training providers.
Finally, after all the relevant information and data have been collected and the evaluation method has been defined, training providers or the relevant QA institution must consolidate and evaluate the information. The objective of this step is to identify strengths and weaknesses of the services provided using the information and data gathered. This step is the precondition for drawing conclusions and suggesting improvement measures in the review phase, the fourth and last phase of the QA cycle (discussed in Chapter 6). Most systems specifically require an evaluation of whether planned objectives and results have been achieved (through self-assessment or external evaluation). EQAVET additionally suggests carrying out continuous quality monitoring through early warning systems. These systems alert providers or the relevant institutions to changes in certain indicators as soon as they occur, so that problems can be addressed before they affect the quality of training services. However, no such system has yet been included in any of the QA frameworks reviewed for this report.
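A minimal early-warning mechanism of the kind EQAVET suggests could, for instance, compare each indicator's latest value against its recent average and flag sizeable drops. The indicator names and the tolerance threshold below are illustrative assumptions, not EQAVET requirements.

```python
from statistics import mean

def early_warnings(history: dict[str, list[float]], tolerance: float = 0.05) -> list[str]:
    """Return the indicators whose newest observation fell by more than
    `tolerance` (in absolute terms) below the mean of earlier periods."""
    alerts = []
    for indicator, values in history.items():
        if len(values) < 2:
            continue  # nothing to compare against yet
        baseline = mean(values[:-1])
        if baseline - values[-1] > tolerance:
            alerts.append(indicator)
    return alerts

# Hypothetical quarterly indicator series for one provider.
history = {
    "completion_rate": [0.82, 0.80, 0.81, 0.70],  # clear drop -> alert
    "participation":   [0.95, 0.94, 0.96, 0.95],  # stable -> no alert
}
print(early_warnings(history))  # → ['completion_rate']
```

The point of such a monitor is timing: a flagged drop in, say, completion rates can trigger corrective action within the current cycle rather than waiting for the next scheduled evaluation.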
Compliance with the different criteria and indicators of the evaluation phase can be proven using training provider documentation. Table 5.3 gives an overview of the evidence that QA systems require from providers to prove their compliance.
Table 5.3. Evidence used for the evaluation phase quality criteria
| Quality criteria | Evidence |
|---|---|
| Stakeholder input | |
| Data/info collected | |
| Evaluation method | |
| Evaluate and consolidate data | |
Source: Author’s elaboration.