This chapter assesses the quality assurance mechanisms introduced under Greece’s “Jobs Again” reform using the five-dimension assessment framework. It analyses DYPA’s approach to accrediting and monitoring training providers, with particular attention to performance-based funding, key performance indicators (KPIs), and the new registry of eligible providers. The chapter highlights strengths, such as increased accountability, while also identifying risks related to labour market volatility, equity, and co-ordination challenges between DYPA and existing institutions. The analysis informs recommendations to strengthen the reform’s coherence, effectiveness, and alignment with long-term skills development goals.
4. Analysis of the “Jobs Again” reform
Abstract
This chapter provides an assessment of the “Jobs Again” reform, focusing specifically on its quality assurance mechanisms through the lens of the five-dimension framework introduced earlier. The analysis examines the measures that DYPA (Greece’s Public Employment Service) is set to implement to accredit and monitor training providers, with particular attention to the introduction of a registry of eligible providers and the use of key performance indicators (KPIs) to ensure accountability and quality.
The assessment offers a general evaluation of the reform’s approach using the five dimensions – covering baseline standards, learner experience, outcome measurement, performance-based monitoring, and co‑ordination/governance.
While the reform introduces broader changes to Greece’s continuous vocational education and training (CVET) system, this evaluation focuses solely on the quality assurance mechanisms that DYPA will implement, particularly those related to provider performance and regulatory oversight.
General assessment of the reform’s approach
Dimension 1: Ensuring Minimum Standards
The “Jobs Again” reform represents a fundamental shift in the governance of CVET by introducing performance-based accountability as a core principle. This shift is significant in at least two key ways:
Outcome-based evaluation: The reform moves away from traditional input-based measures (such as course duration or funding per participant) towards an approach that prioritises measurable outcomes, particularly labour market integration.
Provider accountability: The introduction of a registry of eligible providers, combined with strict accreditation and performance monitoring criteria, creates a structured framework to ensure that only high-quality training institutions participate in publicly funded CVET programmes.
These elements align with international trends in skills development, particularly efforts seen in OECD countries to enhance training effectiveness through data-driven decision-making and performance-based funding models. However, successful implementation will depend on the robustness of data collection and enforcement mechanisms.
Additionally, the reform introduces new quality requirements that affect the baseline standards for provider eligibility, including ISO 9001 (Quality Management) and ISO 27001 (Data Security) certifications. While ISO 9001’s emphasis on customer satisfaction aligns with general service quality principles, it may be less suitable for a vocational training context where the focus should be on broader external outcomes, such as labour market integration and social impact. Alternative standards such as ISO 21001 (Management Systems for Educational Organisations), ISO 29990 (Learning Services for Non-Formal Education and Training), and ISO 29993 (Learning Services Outside Formal Education) offer more tailored frameworks that focus specifically on educational quality, learner outcomes, and instructional effectiveness. A more detailed assessment is needed to determine which standard – or combination of standards – would best support the reform’s objectives in setting minimum requirements.
Dimension 2: Ensuring Student Participation in Quality Learning
A key issue of the current reform is that provider accreditation and evaluation rely primarily on post-training indicators, such as employment rates, job retention, and certification completion (see the following section for a detailed analysis). While these indicators are useful for assessing training effectiveness, they do not guarantee that the training content itself is relevant to current and future skill demands. The reform does not specify how course content will be validated against evolving labour market needs or whether providers must follow a structured framework, such as a national qualifications framework, to ensure consistency in skill classification, competency levels, and alignment with industry standards.
Furthermore, relying on EOPPEP’s accreditation of training providers places the responsibility for ensuring quality at the institutional level rather than at the course level. The introduction of individual learning accounts (ILAs) enhances learner choice, but without strong quality control measures for course content, there is no clear safeguard to ensure that the training offered remains relevant and responsive to shifts in labour market demand.
Finally, it is unclear how the reform ensures that training provision is aligned with labour market demands, as the mechanisms for validating training content are not sufficiently defined. The “Jobs Again” reform seeks to align training provision with labour market needs through collaboration with the National Skills Council and the use of sector-based skills forecasts under the updated Skills Strategy (2023). However, the reform does not establish a clear mechanism to ensure that these learning outcomes are consistently aligned to labour market demands. An in-depth analysis of the link between training and the labour market can be found in Output 2: Policy Analytical Proposal for Greece’s 2025 “Strategy for Labour Force Upskilling and Connection to the Labour Market”.
Dimension 3: Measuring Training Outcomes and Long-term Impact
The “Jobs Again” reform places a strong emphasis on post-training outcomes as the primary measure of training effectiveness, drawing on what is known as Outcome-based Education (OBE) principles. However, it lacks a structured pre-training skills assessment, meaning that there is no clear baseline against which to compare post-training progress. This absence of a reference point creates significant challenges in evaluating the true impact of training on participants’ skills and employability.
One of the key issues is that training participants are not a homogenous group. For example, the training needs and learning progress of long-term unemployed individuals will differ substantially from those of high-skilled workers seeking upskilling opportunities. Without an initial skills assessment, it becomes difficult to distinguish whether post-training employment outcomes are the result of the training itself or merely a reflection of differences in participants’ pre-existing skill levels.
This lack of a baseline also complicates equity in provider evaluation. If performance-based funding and provider accreditation are determined primarily by employment rates and job retention metrics, training centres that work with more disadvantaged or lower-skilled participants may be unfairly penalised compared to those that train already highly employable workers. Without pre-training assessment data, it is difficult to adjust performance expectations based on the initial skills and labour market position of trainees.
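The difference between the reform’s raw employment-rate view and a baseline-adjusted view can be made concrete with a small sketch. The numbers, provider profiles, and the `value_added` measure below are invented for illustration; they are not part of the reform, but they show how a pre-training baseline would change how providers compare.

```python
# Hypothetical illustration: why a pre-training baseline changes provider
# comparisons. All figures are invented for the example.

def raw_employment_rate(placed: int, trainees: int) -> float:
    """Share of trainees employed within 12 months (the KPI's raw view)."""
    return placed / trainees

def value_added(placed: int, trainees: int, baseline_prob: float) -> float:
    """Employment rate minus the rate trainees would plausibly have achieved
    without training, given their pre-training profile. Computing this
    requires exactly the kind of initial skills assessment the reform lacks."""
    return placed / trainees - baseline_prob

# Provider A trains already-employable workers; Provider B trains long-term
# unemployed participants whose baseline employment prospects are far lower.
a_raw = raw_employment_rate(placed=60, trainees=100)   # 0.60 -> passes 50% target
b_raw = raw_employment_rate(placed=45, trainees=100)   # 0.45 -> fails 50% target

a_adj = value_added(placed=60, trainees=100, baseline_prob=0.55)  # +0.05
b_adj = value_added(placed=45, trainees=100, baseline_prob=0.15)  # +0.30

print(f"raw employment rate: A={a_raw:.2f}  B={b_raw:.2f}")
print(f"value added:         A={a_adj:+.2f}  B={b_adj:+.2f}")
```

On the raw KPI, Provider A passes and Provider B fails; on the value-added measure, B outperforms A, which is precisely the equity concern raised above.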
Dimension 4: Performance-based Monitoring
The “Jobs Again” reform introduces a performance-based evaluation system for training providers, using key performance indicators (KPIs) to measure training effectiveness. These KPIs serve as benchmarks for determining whether training leads to successful employment outcomes, programme completion, participant satisfaction, and employer engagement. While the reform’s emphasis on measurable results is an important step toward improving accountability, each KPI presents unique strengths and limitations that must be carefully considered. This section provides a detailed analysis of each proposed indicator.
1. Employment Rate
The employment rate KPI requires that at least 50% of unemployed trainees secure employment within 12 months of completing the programme. This indicator is intended to measure the effectiveness of training in improving employability and is a central metric in evaluating provider performance. By assessing whether participants transition into jobs, it provides a direct measure of labour market integration.
One of the main strengths of this KPI is its clarity and ease of communication. Employment rates are widely understood by policymakers, employers, and the public, making it an accessible metric for assessing the success of vocational training programmes. Additionally, by focusing on employment outcomes rather than training inputs, this KPI encourages providers to align their programmes with real job opportunities, ensuring that the skills being taught are relevant to the labour market.
However, the employment rate KPI is highly dependent on external labour market conditions, making it an imprecise measure of provider quality and effectiveness. Economic downturns, regional disparities in job availability, and fluctuations in demand for skilled workers across different industries all influence employment outcomes independently of the quality of training provided. A high-quality training programme cannot create jobs where they do not exist, and providers working in high-unemployment regions may struggle to meet this target, even if they deliver high-quality instruction. For example, in areas where job opportunities are scarce, factors such as weak local industry, limited employer demand, or economic stagnation can prevent even well-trained graduates from securing employment. This places providers in disadvantaged regions at an inherent disadvantage compared to those in more economically dynamic areas, as their success rate in placing trainees depends as much on external job market conditions as on the effectiveness of their training programmes.
Another limitation of this KPI is its short-term focus. While securing employment within 12 months is an important milestone, it does not account for job quality, career stability, or skill utilisation. Participants may find temporary, low-wage, or part-time jobs that do not fully utilise their training, yet still count towards the KPI. This means that a programme could appear successful on paper despite its graduates being underemployed, working in roles that do not match their newly acquired skills or offer limited career advancement opportunities. For instance, in a tight labour market with widespread shortages, any training may appear effective based on this KPI alone, as employers are willing to hire quickly, even if the match between skills and job requirements is weak. Conversely, in a weaker labour market, even high-quality, demand-driven training may seem ineffective due to limited job opportunities. The risk in the latter scenario is that if this KPI is used to justify discontinuing programmes that fail to meet short-term employment targets, it could lead to long-term skill shortages once the labour market recovers.
Additionally, there is a risk of selection bias, where providers may prioritise trainees who are easier to place in jobs – such as individuals with higher existing qualifications or work experience – while neglecting more vulnerable groups, including long-term unemployed individuals, migrants, or workers with disabilities. If providers are primarily evaluated on employment outcomes, they may avoid serving those who face structural barriers to employment, thereby reinforcing rather than reducing existing inequalities.
2. Job Retention
The job retention KPI requires that at least 90% of trained workers must remain employed for at least 12 months after securing a job. This indicator aims to measure the sustainability of employment outcomes, ensuring that trainees are not just placed in jobs but are able to maintain employment over time.
A key advantage of this KPI is that it introduces a longer-term perspective on employment quality. Unlike the employment rate, which only considers whether a participant finds a job, job retention provides insight into whether that job is stable. This helps distinguish between short-term job placements and meaningful career opportunities, making it a stronger indicator of workforce integration.
However, despite its focus on employment sustainability, this KPI does not measure career progression or skill utilisation. A participant may remain employed for 12 months, but in a job that does not require the skills acquired through training. Without data on wage growth, promotions, or skill utilisation, the job retention KPI will not capture whether training has led to actual career advancement.
Furthermore, job retention is influenced by external factors beyond the provider’s control. Some industries experience high turnover rates due to seasonal employment, contractual limitations, or economic shifts. Even a well-trained worker may be laid off due to business restructuring or automation, making this KPI an unreliable measure of training quality in certain sectors. This is especially true in trades and industries with cyclical employment patterns, where frequent periods of unemployment are the norm rather than a sign of inadequate training. This is the case, for example, in tourism-related sectors, where many jobs are concentrated in the peak summer season, followed by months of inactivity. This inherent seasonality not only distorts job retention metrics, making them less reliable, but also deters potential workers from pursuing careers in these fields. As a result, employers may face persistent labour shortages, exacerbated by the unattractive nature of these roles due to the downtime.
Like the employment rate, this KPI can disadvantage providers working with vulnerable groups. Participants with precarious work histories or gaps in employment may face higher job instability, making it more difficult for providers to meet retention targets. As a result, providers may focus on trainees who are already more likely to retain employment, further exacerbating selection bias.
3. Certification Rate
Once participants complete their training, their skills are certified by third-party certifiers. Certifiers are informed of a programme’s schedule once it begins, including its expected completion date. Participants can choose from the one or two certifiers designated for each programme to validate their skills. These certifiers operate under the supervision of the Ministry of Development, while CVET providers are overseen by the National Organisation for the Certification of Qualifications and Vocational Guidance (EOPPEP), under the Ministry of Education. DYPA also conducts unannounced audits to verify programme delivery and compliance with its requirements.
The certification rate KPI mandates that at least 80% of trainees who complete the programme must obtain certification. This indicator is based on the assumption that certification represents a meaningful measure of skills acquisition, providing formal recognition of training outcomes.
A major strength of this KPI is that it encourages programme completion and ensures that participants receive documented proof of their skills, which can be useful for job applications. It also promotes standardisation within the vocational training system by requiring providers to meet clear certification benchmarks.
However, certification does not necessarily guarantee employability. Many workers with certifications still struggle to find jobs if their newly acquired skills are not in demand. The labour market value of a certificate depends on whether employers recognise and trust the credential, as well as whether the skills being certified match industry needs.
Another issue is that certification pass rates are already between 90‑100% in many cases, meaning that this KPI may not differentiate high-quality training from ineffective programmes, raising concerns about the integrity of the certification process and the quality of outcomes. Without rigorous independent oversight, there is a risk that providers will prioritise easy-to-certify skills over more complex but valuable competencies, artificially inflating success rates while failing to improve real employability outcomes.
4. Participant Satisfaction
The participant satisfaction KPI requires that at least 75% of trainees evaluate the programme positively. This indicator seeks to capture learner feedback and provide insight into training quality from the perspective of participants.
An important advantage of this KPI is that it offers immediate feedback on training experiences, helping providers identify issues related to course structure, teaching quality, and learning environments. It also acknowledges the role of student engagement in effective learning, as motivated and satisfied participants are more likely to complete training successfully.
Despite these benefits, satisfaction surveys are highly subjective and do not necessarily correlate with training effectiveness or employment outcomes. A participant may rate a programme highly due to engaging instructors or user-friendly materials, even if the training does not lead to meaningful job opportunities.
Moreover, this KPI is vulnerable to manipulation, as providers may offer incentives for positive evaluations or make superficial improvements to training delivery without addressing core issues related to skills development or labour market alignment.
5. Employer Satisfaction
The employer satisfaction KPI requires that at least 75% of employers engaged in training must provide a positive evaluation. This indicator aims to ensure that training aligns with industry needs, promoting stronger collaboration between training providers and employers.
One of its main strengths is that it encourages employers to participate in shaping training curricula, making vocational programmes more responsive to labour market demands. Employers who are satisfied with the quality of training are more likely to hire programme graduates, increasing the chances of successful job placements.
However, this KPI has a limited scope, as not all training programmes involve direct employer participation. Many vocational courses provide general skills that may be applicable across multiple industries, making employer feedback less relevant in some cases.
Furthermore, the success of this KPI depends heavily on sufficient response rates from employers. If a significant portion of employers do not participate in the satisfaction survey, the resulting data may provide only a partial view of training effectiveness. Policymakers could consider introducing incentives or streamlined feedback processes (such as simplified online surveys, reminders, or recognition for companies that consistently submit feedback) to boost employer engagement.
Finally, employer satisfaction does not guarantee skill acquisition. A positive rating may be based on general impressions rather than rigorous assessments of trainees’ competencies. Without objective measures of workplace performance, employer satisfaction remains a subjective and inconsistent indicator of training effectiveness.
The strengths and weaknesses of these KPIs, which can be classified into four categories, are summarised in Table 4.1. These categories include Labour Impact KPIs, which measure employment outcomes; Programme Output KPIs, which assess certification and completion rates; User Experience KPIs, which reflect participant satisfaction with training; and Employer Outcome KPIs, which capture employer perspectives on training effectiveness.
Table 4.1. Overview of key performance indicators (KPIs) for provider evaluation
Classification, strengths and weaknesses of proposed KPIs
| KPI Category | Indicator | Target | Strengths | Weaknesses |
|---|---|---|---|---|
| Labour Impact KPIs | 1. Employment Rate | At least 50% of unemployed trainees must secure employment within 12 months of programme completion. | Clear and easy to communicate; focuses providers on real job opportunities rather than training inputs. | Highly dependent on external labour market conditions; short-term focus ignores job quality; risk of selection bias against harder-to-place trainees. |
| | 2. Job Retention | At least 90% of trained workers must remain employed for at least 12 months. | Introduces a longer-term perspective on employment stability and workforce integration. | Does not capture career progression or skill utilisation; distorted by seasonal and cyclical industries; can disadvantage providers serving vulnerable groups. |
| Programme Output KPIs | 3. Certification Rate | At least 80% of trainees who complete the programme must obtain certification. | Encourages programme completion; provides documented proof of skills; promotes standardisation. | Certification does not guarantee employability; pass rates already at 90-100% limit its power to differentiate providers. |
| User Experience KPIs | 4. Participant Satisfaction | At least 75% of trainees must evaluate the programme positively. | Offers immediate feedback on course structure and teaching quality; recognises the role of learner engagement. | Subjective; weakly correlated with employment outcomes; vulnerable to manipulation and superficial improvements. |
| Employer Outcome KPIs | 5. Employer Satisfaction | For employer-linked training, at least 75% of employers must provide a positive evaluation. | Encourages employer involvement in shaping curricula; satisfied employers are more likely to hire graduates. | Limited scope where employers are not directly involved; depends on response rates; does not verify actual skill acquisition. |
Challenges in implementing KPI metrics
The following challenges underscore some of the most pressing concerns arising from the reliance on KPI-driven accountability in the “Jobs Again” reform. Although the indicators themselves (covering employment outcomes, certification rates, and participant and employer satisfaction) can help enhance transparency and provider accountability, certain limitations have surfaced.
1. Over-Reliance on External Labour Market Factors
One of the most significant challenges is the over-reliance on external labour market factors in assessing provider performance. Employment and job retention KPIs depend on variables beyond the control of training providers, including regional disparities, sector-specific demand fluctuations, and broader economic conditions. For instance, providers in high-unemployment regions may face structural challenges that limit job opportunities, making it difficult to meet employment and retention targets despite offering high-quality training. Similarly, providers serving industries with cyclical or seasonal hiring patterns, such as tourism or agriculture, may struggle to meet these targets because employment levels fluctuate throughout the year. This creates a misalignment where funding and accreditation decisions are based on outcomes that providers cannot directly influence.
2. Selection Bias: Incentives to Prioritise Easier-to-Place Trainees
Another key issue is selection bias, which arises from the way performance thresholds are set. Since providers are evaluated based on employment and retention rates, there is an inherent incentive to prioritise trainees who are easier to place in jobs, such as those with higher educational backgrounds or recent work experience. This could lead to the exclusion of individuals most in need of training, such as long-term unemployed workers, migrants, or those with disabilities, thereby reinforcing existing inequalities rather than addressing them. If training providers focus on meeting KPI targets rather than on ensuring access for disadvantaged groups, the reform risks narrowing opportunities rather than expanding them.
3. Limitations of Satisfaction-based Metrics
The use of satisfaction-based metrics presents another challenge. While participant and employer satisfaction KPIs aim to capture feedback on training quality, they are inherently subjective and do not necessarily reflect actual skill acquisition or labour market relevance. A training programme may receive high satisfaction scores due to engaging instructors or a well-structured curriculum, yet fail to improve employment outcomes. Additionally, satisfaction surveys can be influenced by incentives, superficial improvements, or inconsistencies in data collection, making them an unreliable basis for provider evaluation. A more meaningful approach would involve structured pre- and post-training skill assessments to measure actual learning progression rather than relying solely on perception-based indicators.
4. Rigid Pass/Fail Criteria and Short-term Compliance Risks
A further concern is the rigid three-out-of-five failure threshold for removing providers from the registry, which presents a potential risk to training quality and labour market alignment. While intended to ensure provider accountability, the system’s reliance on numerical targets over qualitative assessments might lead to a focus on short-term compliance rather than meaningful improvements in training effectiveness. Providers may prioritise meeting statistical thresholds over investing in pedagogical innovation or addressing skill gaps in a sustainable way.
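The rigidity of this rule can be sketched in a few lines. The thresholds below are those stated in this chapter; the data structure, function name, and example provider are illustrative assumptions, not part of the reform’s legal specification.

```python
# Minimal sketch of the pass/fail logic described above: a provider is
# removed from the registry if it misses three or more of the five KPI
# targets. Thresholds follow the chapter; everything else is illustrative.

KPI_TARGETS = {
    "employment_rate": 0.50,           # >= 50% employed within 12 months
    "job_retention": 0.90,             # >= 90% still employed after 12 months
    "certification_rate": 0.80,        # >= 80% of completers certified
    "participant_satisfaction": 0.75,  # >= 75% positive trainee evaluations
    "employer_satisfaction": 0.75,     # >= 75% positive employer evaluations
}

def registry_decision(results: dict) -> str:
    """Apply the binary rule: three or more missed targets triggers removal,
    regardless of context or of how narrowly each target was missed."""
    missed = [k for k, target in KPI_TARGETS.items() if results[k] < target]
    return "removed" if len(missed) >= 3 else "retained"

# A provider in a weak regional labour market: it narrowly misses the three
# labour-market-dependent KPIs while scoring well on both satisfaction KPIs.
provider = {
    "employment_rate": 0.48,
    "job_retention": 0.88,
    "certification_rate": 0.79,
    "participant_satisfaction": 0.92,
    "employer_satisfaction": 0.85,
}
print(registry_decision(provider))  # -> removed
```

Three near-misses of two percentage points each produce the same outcome as three outright failures, which is exactly the concern about numerical thresholds crowding out qualitative assessment.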
5. Failure to Account for Context-Specific Training Models
Additionally, the binary pass/fail nature of the criteria fails to consider context-specific factors. For instance, upskilling programmes for already employed individuals may focus on improving job performance, career progression, or adapting to new technologies, rather than leading to an immediate job change. The current framework does not adequately accommodate such cases, treating all training programmes under the same rigid structure, which may discourage providers from offering specialised upskilling or lifelong learning opportunities.
Moreover, progression to more advanced vocational programmes could also be recognised as a key indicator of success, particularly for participants with low initial skill levels. Many individuals lack the foundational skills necessary to enrol in job-ready vocational training programmes. In such cases, basic skills training, such as literacy, numeracy, or general education certificates, functions as a critical stepping stone toward employability. Rather than assessing these programmes solely on immediate employment outcomes, advancement to the next stage of education or training should be considered a measure of success.
Without recognising this progression metric, the current system risks undervaluing an essential component of workforce development and inadvertently discouraging investment in foundational skills training for the most disadvantaged learners.
6. Broader Concerns with the Outcome-based Education (OBE) Model
Finally, the broader debate surrounding Outcome-based Education (OBE) raises questions about the long-term impact of this approach. While OBE prioritises measurable employment outcomes, critics argue that this model can oversimplify learning, overlook critical thinking and adaptability, and lead to a narrow focus on test-based performance (Hussey and Smith, 2002[1]). The absence of strong empirical evidence on OBE’s effectiveness at scale further complicates its role in vocational training reform. While OBE can be valuable when integrated into a comprehensive quality assurance framework, its limitations must be carefully managed to avoid reducing training effectiveness to a set of rigid employment statistics.
Dimension 5: Co-ordination and Governance
A key structural change introduced by the “Jobs Again” reform is the addition of a new quality assurance layer under DYPA’s oversight. Currently, the initial screening of training providers is carried out by EOPPEP, under the Ministry of Education, which evaluates providers based on national accreditation standards. The reform, however, introduces additional quality requirements, including ISO 9001 (Quality Management) and ISO 27001 (Data Security) certifications, as well as performance-based criteria that providers must meet to remain in DYPA’s registry.
While these additional requirements serve DYPA’s objective of ensuring that training leads to employability, they also fragment the existing quality assurance system by creating a dual-layered accreditation structure. Providers must first meet the Ministry of Education’s accreditation criteria and then comply with DYPA’s supplementary standards, which include both upfront eligibility requirements and ongoing performance evaluations.
The existence of parallel quality control mechanisms – one under the Ministry of Education through EOPPEP and another introduced by DYPA – creates a dual system of oversight that poses significant governance challenges. These include risks related to co‑ordination, regulatory clarity, and institutional alignment. DYPA’s efforts to establish its own quality standards, including the implementation of ISO requirements, reflect a strong commitment to accountability. However, without effective co‑ordination with EOPPEP, this approach risks generating overlapping responsibilities, divergent benchmarks, and inconsistencies in quality assurance practices.
The duplication of accreditation criteria and quality standards not only increases administrative complexity but also creates uncertainty for training providers navigating the system. Providers have expressed strong concerns that managing two distinct QA frameworks would be burdensome and difficult, particularly for smaller institutions. This added complexity could discourage participation, ultimately reducing the diversity of training provision. To maintain coherence and ensure the success of the reform, it will be essential to align DYPA’s standards with the existing national framework and avoid fragmentation of the CVET quality assurance landscape.
Reference
[1] Hussey, T. and P. Smith (2002), “The Trouble with Learning Outcomes”, Active Learning in Higher Education, Vol. 3/3, pp. 220-233, https://doi.org/10.1177/1469787402003003003.