Quality Matters

2. A comparative review of quality assurance mechanisms in adult education and training

Abstract

Despite their importance, quality assurance mechanisms in adult education and training (AET) have been somewhat overlooked in research. This chapter fills that gap by providing an overview of quality assurance mechanisms in 38 OECD member countries. It proposes a framework to characterise and compare the governance, processes, outcomes and benefits of these mechanisms. It facilitates the understanding of quality assurance across OECD countries by presenting a visual cross-country mapping of existing mechanisms along a number of dimensions.

Introduction: Understanding quality assurance systems in adult education and training

Despite the progress made by countries in establishing mechanisms to ensure the quality of adult education and training (AET) provision, many challenges remain and comparative international experience in this field is scarce. This chapter provides a comprehensive cross-country overview of quality assurance models across OECD countries. It focuses primarily on the provision of non-formal AET (hereafter simply referred to as adult education and training, or AET; see Box 1.1 in Chapter 1). It develops an analytical framework to characterise the variety of quality assurance mechanisms in AET, and indicators to synthesise, classify and compare these mechanisms along key dimensions.
Despite growing agreement among experts and policy makers that quality assurance plays a critical role in AET, this area still attracts less research attention than quality assurance in sectors such as higher education or vocational education and training (VET). In those sectors, quality assurance systems are well entrenched and international standards have been established, such as those of the European Association for Quality Assurance in Higher Education and the European Quality Assurance in Vocational Education and Training framework. The comparatively limited attention to AET may, in part, be due to its inherently more complex and heterogeneous nature.
Building on previous OECD work (OECD, 2021[1]; OECD, 2021[2]), this chapter seeks to address this gap in the literature. Its contribution is to facilitate the understanding of quality assurance systems and practices across OECD countries. To this end, it presents a visual cross-country mapping that classifies the existing quality assurance models. These results enable a better understanding of how countries’ quality assurance systems compare with others internationally.
This chapter is structured as follows. The next section describes the methodology and sample, followed by the resulting analytical framework. It then presents the findings from the assessment: the mechanisms used, the role of governments, coverage, tools, outcomes and incentives involved in quality assessment across 38 OECD countries. It concludes with a discussion of the policy implications of adapting existing mechanisms to growing numbers of AET providers.
The rapidly changing landscape of AET poses several emerging challenges that are yet to be thoroughly explored. Future research in these areas is vital to the continuous adaptation of systems and practices, ensuring they are best equipped to respond to the dynamic needs of adult learners.
Background and methodology
Methodology
This section describes the methodology employed to construct the analytical framework and to derive the results. The methodology comprises three stages: i) desk research; ii) expert consultations; and iii) data collection and analysis.
Desk research
The project began with an extensive desk research phase, during which the OECD collected data and information on the present state of quality assurance in AET across OECD countries. This research encompassed a review of relevant literature, policy documents and proven practices in the field.
Leveraging previous OECD research on this subject (OECD, 2021[1]; OECD, 2021[2]), the OECD formulated a preliminary analytical framework. This framework, which aims to provide a thorough and systematic approach to studying quality assurance in AET, informed the development of a draft questionnaire. The questionnaire was then used to gather data from a sample of countries. This questionnaire sought information on the structure and functionality of quality assurance systems in AET, including details on the types of quality assurance mechanisms employed, the governance model and the results of the process.
Initially, the OECD collected data from a sample of 12 OECD countries to verify the validity of the framework and questionnaire. This sample was scrutinised and used to refine the analytical framework.
Expert consultations
The initial analytical framework and questionnaire were subsequently improved based on substantial input from quality assurance experts. During this phase, the OECD held consultations with AET quality assurance experts from Austria, Portugal, Slovenia, Switzerland and the United Kingdom (England).
The expert consultations encompassed a virtual workshop conducted in March 2022, along with numerous exchanges to provide feedback on the framework and questionnaire. The workshop aimed to gain a deeper understanding of the varied models of quality assurance in AET across these countries. It also sought to validate the findings from the desk research phase and solicit feedback.
The outcomes of these expert consultations were analysed and incorporated to further refine the analytical framework and to guide the project's subsequent phase.
Data collection and analysis
The final framework, along with its associated questionnaire, was applied across all OECD member countries. The data were collected by the OECD from public sources.
Multiple information sources were employed during the data collection phase. These included government and institutional websites, legislation, reports, documents from relevant national organisations and agencies, and academic articles. These data were analysed and consolidated into a database. Individual country profiles highlighting each country's distinct features and characteristics were created and are available upon request.
Sample
The study encompassed all 38 OECD member countries. The specific unit of observation is one particular quality assurance mechanism. Here, a mechanism is defined as a distinct process, system or procedure established to evaluate the quality of AET providers. A quality assurance mechanism has a clearly identified set of criteria, standards, methods and instruments to systematically undertake the quality assessment. It is worth noting that the scope of assessment may encompass evaluating the overall quality of AET providers, examining the quality of specific educational and training programmes, or a combination of both aspects (see the analytical framework below).
It is important to note that multiple mechanisms for quality assurance may exist within a given country, often instituted by diverse entities, both public and private. These entities can range from individual ministries or agencies to multiple public bodies, each with a distinct, though potentially overlapping, focus or scope in the realm of AET quality assurance. Moreover, in federal nations, such as Canada and the United States, each province or state operates its own quality assurance system.
Nevertheless, this study has elected to focus on a single quality assurance mechanism within each OECD country. This decision was driven by several key factors: the availability of public data, the extent to which the quality assurance mechanism provides broad representation and coverage within the AET system, and a preference for mechanisms that place a predominant focus on non-formal AET. This approach does not discount the existence or importance of multiple mechanisms within some countries; rather, it is a methodological choice to streamline the analysis and provide more focused insights.
Table 2.1 below outlines the specific quality assurance mechanisms selected for analysis from each OECD country.
Table 2.1. Selected quality assurance mechanisms by country
# | Country | Quality assurance mechanisms
---|---|---
1 | Australia | Registration by the Australian Skills Quality Authority, ASQA
2 | Austria | Ö-Cert
3 | Belgium (Flanders) | Inspectorate of education
4 | Canada (British Columbia) | Education Quality Assurance (EQA)
5 | Chile | Register of Technical Training Bodies (Organismos Técnicos de Capacitación), OTEC
6 | Colombia | Sistema de calidad de formación para el trabajo (SCAFT)
7 | Costa Rica | Acreditación del Instituto Nacional de Aprendizaje (INA)
8 | Czechia | Register of Schools and Educational Establishments
9 | Denmark | Quality assurance and measurement with Viskvalitet.dk
10 | Estonia | Notice of economic activities for the provision of continuing education
11 | Finland | Vocational education and training quality awards
12 | France | QUALIOPI
13 | Germany | Accreditation and Certification in Employment Promotion Ordinance (Akkreditierungs- und Zulassungsverordnung Arbeitsförderung, AZAV)
14 | Greece | Certification of the teaching qualification of Trainers for Adults of non-formal education by EOPPEP
15 | Hungary | Licensing procedure for adult education providers
16 | Iceland | EQM/EQM+ quality certification
17 | Ireland | QQI Award provider
18 | Israel | Teacher approval in supervised courses
19 | Italy | Self-assessment for provincial centres for adult education
20 | Japan | Quality certification by the Japan Association for Management of Training and Education (JAMOTE)
21 | Korea | Accreditation by the Korean Skills Quality Authority (KSQA)
22 | Latvia (Riga) | Licensing of non-formal adult education programmes
23 | Lithuania | Law on Non-formal Adult Education and Lifelong Learning of the Republic of Lithuania
24 | Luxembourg | Ministerial quality label
25 | Mexico | National Registry of Training Courses Based on Competency Standards (RENAC)
26 | Netherlands | NRTO Quality Mark
27 | New Zealand | NZQA External Evaluation and Review (EER) for tertiary education organisations
28 | Norway | Kompetanse Norge (Skills Norway)
29 | Poland | Accreditation of lifelong learning in out-of-school forms
30 | Portugal | Certification by Direção-Geral do Emprego e das Relações de Trabalho, DGERT
31 | Slovak Republic | Accreditation of further education programmes under Act No. 568/2009 Coll. on Lifelong Learning
32 | Slovenia | Offering Quality Education to Adults (OQEA)
33 | Spain | Questionnaire for the evaluation of the quality of training actions for the employment system
34 | Sweden | The Bedömning, Reflektion, Utveckling, Kvalitet (Assessment, Reflection, Development, Quality) initiative (BRUK)
35 | Switzerland | eduQua
36 | Türkiye | External evaluation by the Board of Education Inspectors
37 | United Kingdom | Inspections of further education and skills providers
38 | United States (Florida) | Quality Assurance and Compliance (QAC) System
Note: This is not a comprehensive list of all quality assurance mechanisms. For the purpose of this study, only one mechanism per country was selected.
Analytical framework
This section presents the analytical framework developed to characterise and compare quality assurance mechanisms across countries, by identifying the key components that define the governance, processes and outcomes of these quality assurance mechanisms.
The framework is structured into three hierarchical tiers:
Macro-dimensions: the framework is built around four macro-dimensions: i) key features; ii) assessment processes; iii) outcomes; and iv) benefits to providers.
Dimensions: each macro-dimension is subdivided into more detailed dimensions.
Indicators: qualitative and quantitative indicators are used to operationalise each dimension and serve as specific measures that enable a comprehensive examination of each dimension.
Figure 2.1 offers an overview of the analytical framework and the macro-dimensions, dimensions and indicators it encompasses. Subsequent sections examine each component of the framework in turn, detailing its nature and scope.
Figure 2.1. Analytical framework: Macro-dimensions, dimensions and indicators
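To make the three-tier structure more tangible, the sketch below shows one possible way of encoding a single observation (one quality assurance mechanism, as in the study's database) along the four macro-dimensions. It is a purely illustrative Python sketch: the class, field names and example values are assumptions made for exposition and do not reproduce the actual structure of the OECD database.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QualityAssuranceMechanism:
    """Hypothetical record for one observed mechanism, organised by the
    four macro-dimensions of the framework (illustrative only)."""
    country: str
    name: str
    # Macro-dimension 1: key features
    mechanism_type: str                      # certification, quality award, quality inspection, self-assessment
    responsible_body: str                    # government agency, ministerial department, NGO
    service_delivery: str                    # "in-house" or "outsourced"
    scope: str                               # "providers", "programmes" or "both"
    cost_eur: Optional[float] = None         # participation fee in EUR; None if free or unknown
    # Macro-dimension 2: assessment process
    quality_elements: List[str] = field(default_factory=list)   # e.g. "training design and delivery"
    tools_and_methods: List[str] = field(default_factory=list)  # e.g. "site visits", "surveys"
    # Macro-dimension 3: outcomes
    grading_system: Optional[str] = None     # "pass-fail" or "multi-category"
    validity_years: Optional[float] = None   # None for open-ended outcomes
    # Macro-dimension 4: benefits to providers
    benefits: List[str] = field(default_factory=list)           # e.g. "eligibility for public funding"

# Purely illustrative record with made-up values
example = QualityAssuranceMechanism(
    country="Country X", name="Example quality label",
    mechanism_type="certification", responsible_body="government agency",
    service_delivery="in-house", scope="providers",
    tools_and_methods=["site visits", "self-assessment questionnaires"],
    grading_system="pass-fail", validity_years=3,
)
```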
Key features
The first macro-dimension in the framework focuses on the key features of a quality assurance mechanism, which represent its most significant characteristics. These features consist of system-level attributes that enable differentiation between different quality assurance mechanisms.
This macro-dimension is defined by five dimensions: i) type of quality assurance mechanism; ii) responsible body; iii) service delivery provider; iv) scope of assessment; and v) cost. These five dimensions provide a comprehensive understanding of the mechanisms, covering their nature, governance, scope of assessment and fundamental operating features.
Type of quality assurance mechanism
The type of quality assurance mechanism dimension refers to how these mechanisms are categorised based on their inherent objectives, methods and outcomes. OECD countries utilise a range of mechanisms, serving different purposes and employing distinct approaches. It is important to note that countries may employ multiple mechanisms rather than exclusively relying on a single one. After a thorough examination, these quality assurance mechanisms were classified into four types: i) certification; ii) quality award; iii) quality inspection; and iv) self-assessment, as described in Table 2.2.
Table 2.2. Types of quality assurance mechanisms
Certification

Definition: Certification is a formal recognition that an AET provider meets specific quality standards. Certifications are conducted by a third party (i.e. separate from the entity seeking certification), whether a governmental or non-governmental organisation. A common outcome of certifications is the acquisition of a quality label, which may take the form of a logo, emblem or symbol that the provider is either able or, in some cases, obliged to showcase in reports and certificates. The attainment of quality certifications can involve on-site and off-site assessment procedures, such as audits and interviews.

Objective | Who conducts the assessment? | Common assessment procedures | Possible outcomes
---|---|---|---
Certify that the services and operations of AET providers or programmes meet specific quality standards. | Third party | Site visits; audits; interviews | Written report; quality label; certificate

Quality award

Definition: A quality award is a recognition granted to an AET provider or programme as part of a competition organised by an external entity, whether a governmental or non-governmental organisation. In these competitions, various AET providers or programmes submit applications or are nominated to be considered for the award. These applications or nominations are then evaluated against predefined criteria that measure the quality of their services. After the evaluation, the winners are selected and the award results are announced. The winning AET providers or programmes are acknowledged for their outstanding performance and quality of service. This quality assurance mechanism can assess different elements of the quality of AET providers or programmes, including continuous improvement and exemplary work in the development of AET. Quality award assessments are typically performed through expert reviews, whether on-site or off-site, and normally result in a prize, distinction or other type of award.

Objective | Who conducts the assessment? | Common assessment procedures | Possible outcomes
---|---|---|---
Recognise the quality of AET providers or programmes. | Third party | Expert review | Prizes; distinctions; other types of award

Quality inspection

Definition: Quality inspection is a systematic evaluation of various elements of an AET provider's operations, curriculum, teaching methodologies, facilities and overall performance, conducted by a third party. The purpose of quality inspections is to determine conformity with predetermined requirements and standards and to identify areas for improvement. Inspections typically involve site visits, reviews of performance indicators, and interviews with trainers, administrators and other relevant stakeholders.

Objective | Who conducts the assessment? | Common assessment procedures | Possible outcomes
---|---|---|---
Determine conformity with the standards and requirements set by the relevant authority and identify areas for improvement. | Third party | Site visits; analysis of performance indicators; interviews | Written report

Self-assessment

Definition: Self-assessment is the process in which an AET provider internally evaluates the quality and performance of its own services and operations against established standards and criteria. Self-assessment is normally carried out through self-evaluations and analysis of performance indicators. The purpose of self-assessment is to promote self-reflection, continuous improvement and accountability within the AET provider. A common outcome of self-assessments is a written report, which serves as a formal record and guides the AET provider in identifying areas for improvement and developing action plans.

Objective | Who conducts the assessment? | Common assessment procedures | Possible outcomes
---|---|---|---
Promote continuous improvement and accountability within an AET provider through self-evaluation. | Internal | Self-evaluations; analysis of performance indicators | Written report
Responsible body
The second dimension refers to the entity or organisation that is responsible for overseeing and managing the quality assurance mechanism. The responsible body is typically a technical entity that is also responsible for setting standards and conducting the evaluations and assessment of providers. However, its specific responsibilities and functions may vary from country to country.
The responsible bodies are categorised into three main types: i) government agencies; ii) ministerial departments; and iii) non-governmental organisations (NGOs), defined in Table 2.3.
Table 2.3. Types of responsible bodies
Type of responsible body | Definition
---|---
Government agency | A government agency is a technical entity that is established by a government to perform specific functions. These agencies are part of the national, state or local government, but are not directly controlled by a ministry or department. Agencies normally have a significant degree of autonomy when compared with ministries or ministerial departments, although their level of autonomy can vary significantly across countries. To enhance comparability in this study, any governmental entity that operates independently from a ministry or department, even if it is not formally designated as an agency, is categorised as a government agency.
Ministerial department | A ministerial department is a specialised department or unit established to perform specific functions, usually under the leadership and oversight of one or more cabinet ministers.
Non-governmental organisation (NGO) | Non-governmental organisations (NGOs) are organisations that operate independently from governments. NGOs are typically non-profit entities and have a specific mission (e.g. social, education).
Service delivery provider
The service delivery provider dimension describes how the responsible body delivers its services. Specifically, it refers to whether the responsible body delegates certain tasks and assessment responsibilities to third parties (i.e. outsourcing) or if the assessment is primarily conducted in-house by the responsible body’s own staff and resources. Therefore, this dimension indicates whether the services are provided in-house or outsourced.
The outsourcing model is common in the accreditation of higher education institutions. In several countries, the accreditation of universities or programmes is carried out by private agencies (i.e. accreditation bodies) authorised by the national or local authority. In this case, the responsible body plays a co‑ordination role and sets the rules and regulations under which the agencies assess the quality of institutions. In contrast, when service delivery is in-house, the organisation (e.g. the responsible ministry or agency) maintains control over the assessment processes instead of outsourcing them to external providers.
Scope of assessment
The scope of assessment dimension refers to the breadth or coverage of the quality assurance mechanism. This scope may involve evaluating the overall quality of AET providers, or examining the quality of specific educational and training programmes, or a combination of both aspects.
When assessing the overall quality of AET providers, the scope of assessment of the quality assurance mechanism extends to the evaluation of the provider as a whole and the mechanism will examine the institutional-level elements influencing the quality of the programmes offered by the provider (e.g. infrastructure, leadership, quality management systems in place). Alternatively, the scope of assessment may be narrower, focusing on specific educational programmes within AET providers. In this case, the quality assurance mechanism will typically place greater emphasis on evaluating elements such as specific course content, instructors’ qualifications, teaching resources and learning outcomes, and the overall effectiveness of those programmes.
Cost
This dimension reflects the immediate financial cost borne by providers to take part in the quality assurance process. The fees are typically paid in advance, at the start of the process, to the responsible entity overseeing the procedure. To facilitate comparisons, fees have been converted from local currencies to euros based on the 2023 exchange rate.
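As a simple illustration of the conversion step described above, the snippet below normalises fees into euros using a single reference rate per currency. The rates and fee amounts shown are placeholders (only the fixed Austrian fee of EUR 100, mentioned later in the chapter, comes from the text); the study's actual 2023 exchange rates are not reproduced here.

```python
# Minimal sketch of the fee-normalisation step: convert participation fees
# from local currencies into euros with a fixed 2023 reference rate.
# The rates below are placeholders, not the rates used in the study.
EUR_PER_UNIT_2023 = {
    "EUR": 1.00,
    "CHF": 1.03,   # assumed illustrative rate
    "AUD": 0.61,   # assumed illustrative rate
}

def fee_in_eur(amount: float, currency: str) -> float:
    """Convert a fee expressed in a local currency into euros for cross-country comparison."""
    return round(amount * EUR_PER_UNIT_2023[currency], 2)

print(fee_in_eur(100, "EUR"))   # a fixed EUR 100 fee stays EUR 100
print(fee_in_eur(850, "CHF"))   # hypothetical CHF fee expressed in euros
```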
Assessment process
The second macro-dimension refers to how the quality assessment is carried out in practice. Specifically, it covers two important dimensions of the assessment process: i) the elements of the training that are assessed; and ii) the assessment tools and methods used.
Quality-assessed training elements
This dimension refers to the specific domains or elements that are evaluated and assessed during the quality assurance process. These training elements encompass important components of the provision that contribute to the overall learning experience and ultimately to the quality of the programmes.
It is important to note that the quality assurance mechanisms cover a large number of elements. However, these elements can be synthesised and aggregated into nine broader categories to simplify the analysis and facilitate comparison between mechanisms. This framework identifies the following quality areas: i) external quality certificate; ii) leadership and management; iii) ongoing monitoring; iv) organisational structure; v) public information; vi) quality management system; vii) regulatory compliance; viii) staff training; and ix) training design and delivery. Each of these elements are defined in Table 2.4.
Table 2.4. Quality-assessed training elements
Quality element | Definition
---|---
External quality certificate | AET providers hold a quality certificate issued by an accredited external entity, which can be a governmental or non-governmental organisation. Examples of such certificates include ISO 29990 or ISO 9001.
Leadership and management | AET providers ensure that leadership enables effective management and operation of the services. Well-documented policies and procedures, well-maintained records, shared values and clear direction are in place to enable a customer-oriented and efficient service.
Ongoing monitoring | AET providers monitor and periodically review their programmes to ensure that they achieve the objectives set out for them and respond to the needs of learners and society. AET providers ensure that they collect, analyse and use relevant information for monitoring their programmes and other activities. This information should lead to continuous improvement of the AET provider and programmes.
Organisational structure | AET providers ensure that the internal structure and organisation of the provider (human resources, facilities and equipment) are appropriate for the operation of the service. Staffing arrangements support learners' continuing education and training. Outdoor and indoor spaces, buildings and fixtures are suitable for their purpose. Premises, furniture and equipment are safe and well maintained.
Public information | AET providers publish information about their activities, including programmes, which is clear, accurate, objective, up-to-date and readily accessible.
Quality management system | AET providers have a quality management system (QMS) in place, meaning that they have implemented a systematic approach to ensure consistent quality in their educational programmes and training services. The QMS involves documented processes, procedures and policies tailored to the specific needs of adult learners.
Regulatory compliance | AET providers adhere to laws and regulations relevant to their operations, set forth by local, regional or national governments. The specific requirements can vary, depending largely on the type of providers and programmes delivered. This quality area also includes adherence to ethical principles.
Staff recruitment and training | AET providers ensure the quality of the methodological-didactical competences of their teachers and trainers. AET providers apply fair and transparent processes for the recruitment and professional development of their staff.
Training design and delivery | AET providers ensure that the training design and delivery is stimulating and engaging and enhances learners' continuing education and training. The development process of training programmes (planning, design, organisation, development and training assessment) is appropriate for the operation of the service. AET providers ensure that programmes meet students' educational needs, and that student support and academic assistance are provided.
Assessment tools and methods
This dimension refers to the set of instruments and tools employed to gather the information and data to conduct the quality assessment. Six assessment tools and methods were identified: i) analysis of performance indicators; ii) expert reviews; iii) interviews; iv) self-assessment questionnaires; v) site visits; and vi) surveys, which are defined in Table 2.5.
Table 2.5. Assessment tools and methods
Tool or method | Definition
---|---
Analysis of performance indicators | Refers to the process of evaluating or measuring the effectiveness of the service delivery. Common performance indicators include enrolment, graduation rates, satisfaction of learners and learning outcomes (e.g. grades).
Expert reviews | Another means to collect and analyse information is through expert reviews. For example, the responsible body can rely on experts (e.g. academics, experienced practitioners, employer representatives, peers) to conduct a quality assessment that is used as an input for the overall process.
Interviews | Interviews can be used to collect qualitative, first-hand information and insights from key stakeholders such as trainers, learners or administrators. Depending on the type of information to be collected, interviews can be structured or semi-structured.
Self-assessment questionnaires | As part of some quality assurance processes, providers are requested by the body conducting the assessment to complete a self-assessment questionnaire. Self-assessment questionnaires allow AET providers to assess their own quality by reflecting critically on their performance. Most commonly, the main objective of this instrument is to identify strengths and weaknesses in order to develop a plan for continuous improvement.
Site visits | Site visits are physical inspections of an AET provider that aim to gather first-hand information about the operations and the quality of the educational services delivered. Site visits are typically performed by an assessment team from an independent third party (e.g. staff from the responsible body). During a site visit, the assessment team observes the functioning and operations of the provider, collects data and relevant information, and typically conducts interviews and surveys with trainers, trainees and relevant stakeholders.
Surveys | Surveys of key stakeholders such as trainers, learners or administrators can be an important tool to gather information about specific aspects of the quality of AET providers. The advantage of surveys over interviews is that the former can reach a greater number of individuals.
It is important to highlight that the instruments described in Table 2.5 are not mutually exclusive, which means the use of one does not preclude the use of others. In fact, most quality assurance mechanisms employ a combination of these instruments. For instance, expert reviews or site visits can be coupled with interviews and surveys. In a similar vein, the scrutiny of performance indicators can form a key element of both self-evaluation and expert review processes.
Assessment outcomes
This macro-dimension refers to the outcomes of the quality assurance mechanism, which encompasses the grading system – how the results of a quality assurance process are presented (for example a grade or pass/fail) – and how long the results remain valid.
Grading system
This dimension refers to the grading system used to report the results of quality assurance processes. Two categories were identified: i) multi-category grading systems; and ii) pass-fail grading systems.
A multi-category grading system assigns one of several categories (e.g. a score or letter grade) based on the performance of the AET provider or programme in the process. In contrast, a pass-fail grading system is a binary system that assigns either a pass or a fail.
It is worth noting that one advantage of using a multi-category grading system is that it provides a more nuanced and detailed evaluation of the provider's overall performance. Alternatively, a pass-fail system focuses on whether the provider meets the established criteria and standards or not. It simplifies the evaluation process, categorising providers as either meeting the requirements (pass) or falling short (fail).
Validity period of the outcome
This dimension of the framework refers to how long the outcome of the quality assurance mechanism is considered valid or relevant, measured in years. The validity period of the outcome is set by the responsible body or by external regulations, and it may vary depending on the type of quality assurance mechanism and the context in which it is used. The possible categories for this indicator are: i) fixed-term; and ii) open-ended.
Benefits to providers
The last macro-dimension of the framework refers to the advantages that AET institutions or providers might enjoy as a result of participating in a quality assurance process. It is important to note that these potential benefits are not mutually exclusive. The potential benefits to providers are classified into five categories: i) display of quality label; ii) eligibility for public funding; iii) licence to operate; iv) listing in a registry of providers; and v) awards, which are defined as follows in Table 2.6.
Table 2.6. Benefits to providers
Benefit | Definition
---|---
Display of quality label | Quality labels indicate that an AET provider has attained specific quality standards. These labels can be an important tool to increase the credibility and reputation of the provider. Labels serve as a tangible demonstration of the provider's commitment to quality, instilling confidence and satisfaction in learners and trust in the institution. Labels can (and in some cases must) be displayed publicly, for example on the institution's website, brochures and marketing products. Even though the processes through which quality labels and quality awards are obtained differ substantially, quality awards are similar to quality labels with respect to their implications for the AET provider's credibility, visibility and reputation.
Eligibility for public funding | Undergoing and passing a quality assurance process may also be a requirement for accessing public funding. For example, successful AET providers may be eligible to receive direct subsidies (i.e. supply-side subsidies). Alternatively, learners enrolled with an AET provider that passes a quality assurance process may be eligible for state financial support (i.e. demand-side subsidies), such as scholarships or state-backed loans.
Licence to operate | The quality assurance process can also grant an AET provider the legal permit to carry out its operations according to specified conditions and regulations. The licence to operate is usually subject to regular monitoring to ensure that the standards are maintained over time.
Listing in a registry of providers | A positive outcome in a quality assurance process can also result in the AET institution being listed in the database of institutions authorised to offer AET services. Registries typically list basic information about the providers and provide a central repository of information about education providers, which makes it easier for prospective students to find and compare AET providers.
Award | Corresponds to the prizes, distinctions or other awards granted to those participating in certain quality assurance mechanisms.
Findings
Figure 2.2. Overview of selected quality assurance mechanisms in OECD countries
% of mechanisms in the sample
Figure 2.2 offers an overview of the findings, which are derived from applying the analytical framework to examine the features of the selected quality assurance mechanisms in OECD countries. The figure shows the proportion of mechanisms within each category, or the proportion of mechanisms where a specific indicator is observed. By examining one quality assurance mechanism from each OECD member country and employing a systematic approach to analyse the collected data, these findings offer valuable insights into the variety of quality assurance systems across OECD countries.
Quality assurance mechanisms used
The majority of mechanisms (24 out of 38, approximately 63%) employ certification as their quality assurance mechanism for AET, including those in countries such as Colombia, Estonia, Israel, Japan, Korea and Luxembourg (Figure 2.3).
Quality inspection is the second most common quality assurance mechanism, observed in seven mechanisms (around 18%), such as Belgium (Flanders), Norway and the United Kingdom (England). Self‑assessment is adopted by six mechanisms (around 16%), including those examined in Italy, Lithuania and Slovenia.
In our sample, only Finland uses a quality award mechanism. The Ministry of Education and Culture organises an annual quality award competition, designed to incentivise providers to assess and enhance the quality of their activities. This recognition allows awarded training providers to showcase their commitment to excellence by displaying the ministry’s quality award badge in their communications. Additionally, the prize amount received serves as a means to further develop and enhance the activities of the training provider.
Figure 2.3. Mapping quality assurance mechanisms and responsible body by country
% of mechanisms in the sample
Figure 2.3 provides a visual representation of the type of quality assurance mechanism used by each country (certification, quality awards, quality inspection or self-assessment), as well as the type of body responsible for its implementation (government agency, ministerial department or NGO).
Similarly, as shown in Figure 2.2, the scope of the assessments also varies. While 37% of mechanisms (14 in total) focus solely on assessing institutions, as in Lithuania and the Netherlands, 32% of mechanisms (12 in total) restrict their assessments to specific programmes (as is the case in Latvia and Portugal). The same percentage (32%, or 12 mechanisms) use quality assurance mechanisms that evaluate both institutions and specific programmes (as is the case for Denmark, Germany and Korea).
Certification mechanisms are more likely to focus on programme-level assessments, with 14 out of 24 mechanisms (around 58%) adopting this approach. Self-assessment and quality inspections tend to be used for both institutional and programme-level assessments.
When it comes to costs, there is less information readily available compared to other components of the framework. There are no publicly available data on fees for 18 mechanisms (around 47%), 7 mechanisms (around 18%) offer free quality assurance processes, and the remaining 13 (around 34%) have paid quality assurance processes. For instance, among the mechanisms examined, those in countries like Chile, Hungary and New Zealand offer free quality assurance mechanisms, while France and the Slovak Republic have fee-based quality assurance mechanisms.
The data collected on fees charged for quality assurance across the sample reveals significant variation in pricing structures and information availability. Some mechanisms have specified fees, such as Austria, where there is a fixed fee of EUR 100. In other mechanisms, including Iceland and Japan, the price varies depending on the type of assessment (e.g. Iceland), or the entity chosen to conduct the assessment (e.g. Germany). Notably, Ireland stands out with a fee of EUR 5 000.
The role of governments in ensuring quality
Governments play a key role in ensuring quality standards of AET in our sample of mechanisms, as evidenced by their involvement in a large number of the quality assurance mechanisms examined. As shown in Figure 2.2 and Figure 2.3, roughly 50% of the mechanisms have a government agency as the responsible body for overseeing quality assurance, including in Australia, Greece, Iceland, Ireland, Korea and Mexico. Ministerial departments are responsible in 42% of the mechanisms (16), such as in Canada (British Columbia), Czechia, Denmark, Estonia, Israel and Latvia (Riga).
Non-governmental organisations (NGOs) oversee quality assurance in four mechanisms (10%), namely those examined in Japan, the Netherlands, Slovenia and Switzerland. These organisations serve as central co-ordinating bodies. For example, in Slovenia, the Slovenian Institute of Adult Education fulfils this role, while in the Netherlands, the Dutch Council for Training and Education is the co-ordinating association for all private training and education providers and is the institution responsible for issuing the quality marks. Similarly, the Swiss Federation for Adult Learning is a non-governmental umbrella organisation representing both public and private institutions, associations and personnel managers, and is responsible for managing the eduQua quality label.
In most mechanisms (32 mechanisms or 84%), the responsible body conducts assessments in-house. Examples of mechanisms with an in-house approach include those examined in Belgium (Flanders), Costa Rica, Estonia, Finland, Greece and Israel. Six mechanisms (16%) outsource the service delivery, where tasks and assessment responsibilities are delegated to third parties. This is the case in Austria, Chile, Colombia, France, Germany and Japan.
The coverage of quality assurance mechanisms
The previous section showed the diverse array of quality assurance mechanisms used in adult education and training. This section focuses on which elements of the training (as listed in Table 2.4) these mechanisms assess. This analysis, summarised in Table 2.7 and Figure 2.4, reveals a varied landscape, with different mechanisms assessing different combinations of training elements. It is worth noting that information was not available for all mechanisms. As a result, the percentages presented in this section and those that follow are based solely on the mechanisms for which information is available.
Table 2.7. Coverage of selected quality assurance mechanisms in OECD countries
QUALITY ELEMENTS COVERED BY DIFFERENT QUALITY ASSURANCE MECHANISMS |
|||||||||
---|---|---|---|---|---|---|---|---|---|
OECD countries |
External quality certificate |
Leadership and management |
Ongoing monitoring |
Organisational structure |
Public information |
Quality management system |
Regulatory compliance |
Staff recruitment and training |
Training design and delivery |
Australia |
● |
● |
● |
● |
● |
● |
● |
● |
|
Austria |
● |
● |
● |
● |
● |
||||
Belgium (Flanders) |
● |
● |
● |
● |
● |
||||
Canada (British Columbia) |
● |
● |
● |
● |
|||||
Chile |
● |
● |
● |
● |
● |
● |
|||
Colombia |
● |
● |
● |
● |
● |
||||
Costa Rica |
● |
● |
● |
● |
|||||
Czechia |
- |
- |
- |
- |
- |
- |
- |
- |
- |
Denmark |
● |
● |
|||||||
England (UK) |
● |
● |
● |
||||||
Estonia |
● |
● |
● |
● |
● |
||||
Finland |
● |
● |
● |
● |
|||||
France |
● |
● |
● |
● |
● |
● |
|||
Germany |
● |
● |
● |
● |
● |
||||
Greece |
- |
- |
- |
- |
- |
- |
- |
- |
- |
Hungary |
● |
● |
● |
● |
● |
||||
Iceland |
● |
● |
● |
● |
● |
● |
|||
Ireland |
● |
● |
● |
● |
● |
● |
● |
● |
|
Israel |
- |
- |
- |
- |
- |
- |
- |
- |
|
Italy |
- |
- |
- |
- |
- |
- |
- |
- |
|
Japan |
● |
- |
- |
- |
- |
- |
- |
- |
- |
Korea |
● |
● |
● |
● |
|||||
Latvia (Riga) |
● |
● |
|||||||
Lithuania |
● |
● |
● |
● |
|||||
Luxembourg |
● |
● |
● |
||||||
Mexico |
● |
||||||||
Netherlands |
● |
● |
● |
● |
● |
● |
|||
New Zealand |
● |
● |
● |
● |
● |
||||
Norway |
- |
- |
- |
- |
- |
- |
- |
- |
|
Poland |
● |
● |
● |
● |
● |
||||
Portugal |
● |
● |
● |
||||||
Slovak Republic |
● |
● |
● |
||||||
Slovenia |
● |
● |
● |
● |
● |
||||
Spain |
● |
● |
● |
||||||
Sweden |
● |
● |
● |
||||||
Switzerland |
● |
● |
● |
● |
● |
● |
|||
Türkiye |
● |
● |
● |
● |
● |
||||
United States (Florida) |
- |
- |
- |
- |
- |
- |
- |
- |
Note: Filled circles (●) indicate the quality elements covered by the particular mechanism. Dashes indicate unavailable data.
Figure 2.4. Mapping quality assurance mechanism and coverage by country
External quality certificates are one such aspect. As mentioned above, some mechanisms require AET providers to hold an external quality certificate issued by an accredited body, be it a government or non-government organisation, such as ISO29990 or ISO9001. Table 2.8 lists the external quality certificates accepted in a sample of countries. This requirement is present in roughly 22% of mechanisms in our study, in Austria, Canada (British Columbia), Chile, Colombia, Costa Rica, Germany, Japan and the Netherlands.
Table 2.8. External quality certificates accepted in selected OECD countries
External quality certificates accepted in selected mechanisms

Country | List of accepted quality certificates
---|---
Austria | ÖNORM EN ISO 9001:2008, ISO 29990 and ISO 21001; EFQM (European Foundation for Quality Management); LQW (Learner-Oriented Quality Certification for Further Education Organisations by Art-Set); Trademark QVB; EduQua; UZB; OÖ-EBQ; VET CERT-NÖ; S-QS; Wien-cert
Chile | The Chilean Quality Standard for Technical Training Organisations, NCh2728
Germany | ISO 9001; DAkkS Deutsche Akkreditierungsstelle GmbH
Japan | ISO 29990; ISO 29991
Netherlands | ISO 9001; CRKBO; CROHO
Another quality aspect of interest is leadership and management. This dimension covers whether AET providers are expected to foster an environment in which leadership enables the effective management and operation of services. This includes well-documented policies and procedures, well-maintained records, shared values, and a clear direction to enable a customer-oriented and efficient service. This quality assurance measure is adopted by around 42% of mechanisms, in Australia, Belgium (Flanders), Chile, France, Finland, Ireland, Lithuania, the Netherlands, New Zealand, Slovenia, Switzerland, the Republic of Türkiye (hereafter "Türkiye") and the United Kingdom (England).
Ongoing monitoring encompasses the continuous tracking and periodic review of programmes by AET providers to ensure that they achieve the objectives set for them and meet the needs of learners and society. It considers whether AET providers collect, analyse and use relevant information for the monitoring of their programmes and other activities, leading to continuous improvement. This quality area is observed by approximately 45% of mechanisms, including in Australia, Chile, Estonia, Finland, France, Iceland, Ireland, Korea, New Zealand, Portugal, Slovenia and Türkiye.
The organisational structure quality area covers whether AET providers are expected to ensure that their internal structure, including human resources, facilities and equipment, is suitable for service operation. Staffing arrangements should support learners’ continuing education and training, with spaces, buildings and fixtures being safe and well maintained. This quality area is part of the assurance process in around 71% of mechanisms, including those observed in Belgium (Flanders), Colombia, Estonia, Hungary, Ireland, Poland, Slovenia, Spain, Sweden and Switzerland.
The public information quality area requires AET providers to publish clear, accurate, objective and up-to-date information about their activities, including programmes. This measure is included in the quality assurance systems of just 23% of mechanisms, including those observed in Hungary, Iceland, Ireland, Luxembourg, the Netherlands and Switzerland.
As mentioned above, some quality assurance mechanisms require AET providers to have a quality management system (QMS) in place to ensure consistent quality in their educational programmes and training services. This requirement is adopted by roughly 45% of mechanisms, such as the cases of Belgium (Flanders), Canada (British Columbia), Chile, Colombia, Germany, Hungary, Iceland, Poland and Sweden.
Regulatory compliance pertains to the adherence of AET providers to laws and regulations relevant to their operations, set forth by local, regional or national governments. The specific requirements can vary, often depending on the type of providers and programmes delivered and can include ensuring that AET providers comply with relevant education laws and regulations, employment and labour laws, anti-discrimination laws, and privacy and data protection laws. This quality area is part of the quality assurance process in approximately 36% of mechanisms, including those examined in Australia, Austria, Canada (British Columbia), Germany, Korea, Luxembourg, Poland, Türkiye and the United Kingdom (England).
With respect to teaching staff, AET providers are expected to ensure the quality of the methodological-didactical competences of their teachers and trainers. This includes applying fair and transparent processes for the recruitment and professional development of the staff. This quality area is covered by 61% of the mechanisms under study, including the cases of Australia, Belgium (Flanders), Colombia, Denmark, Estonia, Latvia (Riga), Lithuania, Luxembourg, the Netherlands, the Slovak Republic, Slovenia, Spain and Switzerland.
Finally, with regards to the training design and delivery quality area, AET providers are generally required to ensure that there are appropriate processes in place for the development of training programmes, including planning, design, organisation, development and training assessment. Providers must ensure that programmes are relevant and meet students’ educational needs and that student support and academic assistance is provided. This area is an important aspect of the quality assurance process in around 94% of mechanisms, making it the most widely adopted quality area. This includes mechanisms observed in countries such as Australia, Belgium (Flanders), Canada (British Columbia), Chile, Colombia, Costa Rica, Austria, Finland, Germany, Poland, Portugal, Slovenia, Spain, Sweden and Switzerland.
On average, mechanisms cover approximately four of the nine potential quality areas. This average, however, masks substantial variation across mechanisms, suggesting diverse approaches to quality assurance in adult education and training. For example, Ireland's mechanism stands out as the most comprehensive, covering eight observable quality elements, while mechanisms in countries such as Chile, France, Iceland, the Netherlands and Switzerland cover six. At the other end of the spectrum, mechanisms in countries such as Denmark, Latvia and Mexico cover only one or two areas.
Assessment tools and methods used
A range of assessment tools and methods are adopted by AET quality assurance mechanisms across OECD countries (Figure 2.5). This section explores these practices and their prevalence, providing a comprehensive understanding of the current practices in quality assurance. The framework identified six distinct practices, namely, analysis of performance indicators, expert reviews, interviews, self-assessment questionnaires, site visits, and surveys, as defined in Table 2.5 above.
One prevalent practice observed is the use of site visits, employed by 21 out of the 34 mechanisms with available data (around 62%). Site visits, involving direct inspection of AET providers, are used by a diverse range of mechanisms, including those in Australia, Belgium (Flanders) and Denmark, reflecting their broad appeal.
Self-assessment questionnaires, allowing AET providers to conduct an introspective evaluation of their performance, are utilised by seven mechanisms (35% of the mechanisms with available data). Notably, a combination of site visits and self-assessment questionnaires is a common practice, as seen in the mechanisms examined in Australia, the Netherlands and New Zealand, among others.
Interviews, used for gathering qualitative insights from stakeholders, are employed in four mechanisms (18%), in Australia, Belgium (Flanders), Israel and the United States (Florida). In some countries, these mechanisms also include expert reviews, where external experts assess quality.
The analysis of performance indicators, a method measuring quantitative parameters such as enrolment and graduation rates, learner satisfaction and learning outcomes, is the least commonly used assessment tool, used only by the mechanisms examined in Finland, Korea and the United Kingdom (England).
Surveys, despite their ability to reach a wider audience than interviews, are used by only four countries: Australia, Denmark, Spain and the United Kingdom (England).
The mechanisms in our sample commonly use one or two assessment tools and methods (1.34 on average across the sample), but several mechanisms take a more integrative approach that combines multiple tools. Ireland, for instance, employs a combination of expert reviews, self-assessment questionnaires and site visits, while the United Kingdom (England) combines the analysis of performance indicators with interviews, site visits and surveys.
In terms of correlation with the type of quality assurance mechanism, site visits are very common in certification mechanisms, as seen in countries such as Canada (British Columbia) and Chile. Not surprisingly, self-assessment questionnaires are commonly associated with self-assessment mechanisms, as observed in Italy and Lithuania, but these questionnaires are also required under some certification and quality inspection mechanisms. Finally, expert reviews are used across diverse types of mechanisms, suggesting their versatility.
Figure 2.5. Mapping quality assurance mechanisms and tools and methods by country
Note: Countries with no connections indicate that there is no available data for the quality assurance mechanism in those particular countries.
Outcomes and validity types
In terms of outcome types, the “pass-fail” grading system emerges as the most common, being used in 22 out of the 26 mechanisms where data are available (approximately 84%; Figure 2.2). This includes for example Canada (British Columbia) and France. A less frequent outcome is a grade (15% of the mechanisms where data are available), used in the mechanisms studied in Finland, New Zealand, the United Kingdom (England) and the United States (Florida).
In terms of the validity period of the outcome, fixed-term validity is the most common practice, observed in 22 mechanisms (around 63% of those providing data). The length of the validity period, however, varies considerably. To better understand this, we computed an average validity duration, considering only those mechanisms where specific years were provided; where a mechanism specifies multiple periods, these were averaged. The average validity period is approximately 3.3 years. The period in Ireland, at 5 years, is in the higher range, while mechanisms in countries like Canada (British Columbia) and Latvia (Riga) have shorter validity periods of 1 and 2 years, respectively.
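The averaging rule described above can be illustrated with a short calculation of the kind sketched below. The figures for Ireland, Canada (British Columbia) and Latvia (Riga) come from the text; the remaining entries are hypothetical and only show how several possible periods for one mechanism are averaged before computing the cross-mechanism mean.

```python
# Illustrative computation of the average validity period. Mechanisms with no
# specified number of years (open-ended or unknown) are excluded; where a
# mechanism specifies several possible periods, those are averaged first.
validity_periods = {
    "Ireland": [5],                      # from the text
    "Canada (British Columbia)": [1],    # from the text
    "Latvia (Riga)": [2],                # from the text
    "Country X": [3, 5],                 # hypothetical mechanism with two possible periods
    "Country Y": None,                   # open-ended or unknown: excluded
}

per_mechanism = [sum(p) / len(p) for p in validity_periods.values() if p]
average_validity = sum(per_mechanism) / len(per_mechanism)
print(round(average_validity, 1))  # mean over the mechanisms with specified periods
```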
The type of quality assurance mechanism appears to be correlated with the grading system and its validity. Certifications, for instance, predominantly employ a pass/fail outcome and have a fixed-term validity, as seen in countries like Chile and France. On the other hand, quality inspections, as seen in the cases examined in the United Kingdom (England) and the United States (Florida), often result in a graded outcome.
Incentives to providers for participating
The analysis of the collected data presents a comprehensive overview of the benefits that AET providers can accrue by participating in quality assurance across OECD countries. The benefits are categorised into five distinct factors: eligibility for public funding, display of quality label, being listed in the registry of providers, licence to operate and, in the case of quality awards, receiving an award.
The most prevalent benefit among the mechanisms with available data is eligibility for public funding, offered to AET providers by almost 50% of the mechanisms, as observed in Colombia, Estonia, France, Greece, Japan, Luxembourg and Portugal. The second most common is the opportunity to display a quality label, which is offered by 43% of the mechanisms, including those in Canada (British Columbia), Colombia, Finland, France, Iceland, Japan, Korea, Luxembourg, the Netherlands and Portugal.
Being listed in a registry of providers is offered by 35% of the mechanisms with available information, including those in Colombia, Estonia, France, Japan, Luxembourg and Portugal. Finland's is the only mechanism examined with a quality award system and, consequently, the only one offering the award benefit. A licence to operate is also a less common benefit, provided only by the systems examined in Czechia, Hungary and Latvia (11% of the mechanisms).
There is a notable correlation between the type of quality assurance mechanism and the benefits provided. For example, certification mechanisms, employed by countries such as Canada (British Columbia) and Chile, are most commonly associated with the benefit of a listing in the registry of providers.
The data suggest that AET providers can derive significant benefits from participating in quality assurance processes. However, these benefits vary widely across mechanisms and are influenced by the specific type of quality assurance mechanism in place. Policy makers should consider these findings when designing or refining their quality assurance systems, ensuring that they provide substantial incentives for AET providers to participate and thereby improve the quality of adult education and training.
Conclusions and policy implications
Main findings
In conclusion, this chapter has provided an analytical framework for examining quality assurance mechanisms in AET across OECD countries. The findings highlight the diversity of existing practices and offer insights into the variety of quality assurance mechanisms used for ensuring quality, the types of supervisory authorities involved, the data used for making assessments and the methodologies for collecting those data, and the range of quality elements that are assessed, among others. This research contributes substantially to international peer learning on quality assurance of AET, serving as a resource for policy makers considering reforms in their quality assurance systems. Ultimately, this study marks the first attempt to provide an international view of quality assurance systems in AET.
The framework is composed of four overarching macro-dimensions: i) key features; ii) assessment processes; iii) outcomes; and iv) benefits to providers. Each macro-dimension is broken down into more disaggregated dimensions, which refer to specific and fundamental aspects of the quality assurance mechanism, including the type of assessment, responsible body, quality elements covered, assessment tools and methods, and benefits for providers, among others. Each dimension is operationalised through a series of qualitative and quantitative indicators, which are specific measures that allow for an in-depth examination of the dimension and provide a basis for international comparison.
It is worth acknowledging that countries may have multiple quality assurance mechanisms in place, but for the purposes of this study, only one mechanism per country was considered and analysed. It is also important to note that for some countries, the available public information was not as comprehensive as needed for the analysis. These information gaps unfortunately restrict the depth and scope of the analysis.
Based on the sample examined in this study, certification is the predominant quality assurance mechanism, with approximately 63% of countries implementing this approach. Quality inspection is the second most prevalent mechanism, used by around 18% of countries. In terms of supervisory authorities, government agencies constitute the majority, overseeing quality assurance mechanisms in 47% of countries. Close behind are ministerial departments, which hold this responsibility in 42% of countries. Conversely, NGOs are in the minority, managing quality assurance in only 11% of countries.
With respect to assessment tools and methods, site visits are the most common mechanism, used by 14 out of the 32 surveyed countries. Self-assessment questionnaires are used by a total of seven countries, whereas interviews are practised in only four countries. Expert reviews, analysis of performance indicators and surveys remain comparatively underused tools.
Concerning the quality elements examined, countries blend different elements in unique configurations. For example, the area of training design and delivery is widely covered, with approximately 63% of mechanisms assessing this aspect for quality. Additional elements of quality that are widely covered include leadership and management, ongoing monitoring, and staff training.
Eligibility for public funding, the opportunity to display a quality label and being listed in the registry of providers are the main benefits for AET providers engaging in quality assurance processes.
This study is a useful resource for countries planning to review and/or introduce changes to their quality assurance systems. By showcasing what other countries are doing in this field, it highlights international practices that could inspire local reforms. Hence, the findings offer a starting point for understanding and adopting successful strategies, thus supporting peer-learning opportunities among OECD countries.
Policy implications
The landscape of AET is experiencing a remarkable shift. Megatrends such as technological change and the green transition, and events like the COVID-19 pandemic, have highlighted the importance of lifelong learning. In response to these trends, countries across the OECD have increased their support to enable individuals to undertake further training and to engage in learning activities.
Among the various support measures, one has captured the attention of policy makers across OECD and European countries: individual learning schemes (ILSs). These schemes, which include individual learning accounts (ILAs) and vouchers, are designed to promote lifelong learning by making education and training more accessible and affordable for individuals. For example, ILAs provide financial resources, often through government funding or employer contributions, that individuals can use to pay for learning activities such as training courses or education programmes. ILAs place the purchasing power directly in the hands of the learners. By doing so, ILAs provide individuals with the autonomy to choose the learning and training options that best suit their needs and aspirations.
Individual learning schemes can foster an increase in the supply and diversity of training. By providing individuals with financial resources, ILSs enhance accessibility and affordability, expanding the pool of potential learners. This, in turn, encourages a greater number of training providers to participate, promoting market competition and driving providers to improve the quality and relevance of their offerings. ILSs also stimulate the entry of new providers, fostering innovation and introducing diverse perspectives and approaches to training. Additionally, ILSs can be targeted towards under-represented groups, addressing equity concerns while further diversifying the range of training options available and enabling individuals to access a broader array of learning opportunities.
In this context, it is essential to ensure that the financial investment in supporting lifelong learning yields high returns by maintaining high-quality training standards that effectively support individual learning and career goals. This prompts an essential question: what are the most effective quality assurance mechanisms for an efficient, scalable and robust quality assurance system capable of effectively handling growing numbers of training programmes and providers? Though this question is posed rhetorically, it emphasises the need for new research to navigate towards evidence-based answers and policy recommendations.
Scaling different types of quality assurance mechanisms as provider numbers grow
When faced with a significant rise in the number of providers and programmes, it is crucial to ensure that the quality assurance mechanism in place can effectively handle the potentially large volume. It is important to underscore that no single quality assurance mechanism will function well for all quality assurance needs. Each has its unique strengths and can be more or less suited to different contexts. These mechanisms are tools in a policy maker’s toolbox, best used in combination depending on the particular circumstances of a country or provider.
Ensuring a consistent and efficient approach to certification
Certification is an important mechanism for ensuring quality across numerous ILA providers. The main advantage is that by establishing explicit quality standards, providers can undergo certification by a third-party organisation. This approach ensures consistency and enables streamlined evaluation processes. Standardised criteria and efficient procedures help manage a larger volume of providers seeking certification while upholding minimum quality standards. However, certification processes can be time-consuming, costly and administratively burdensome, especially when dealing with a large number of providers. Managing a high volume of certification applications and assessments can strain resources and potentially lead to delays in the certification process.
Empowering providers through self-assessment
Self-assessments may be useful when there are significant numbers of providers. Through self-evaluation, providers assess their services and operations using predetermined quality indicators or benchmarks. Empowering providers to take responsibility for their own quality improvement efforts reduces the burden on external evaluators. Scalability and efficient management of a larger number of providers could be achieved through online platforms or automated systems, facilitating streamlined self-assessment procedures. However, relying solely on self-assessment may raise concerns about objectivity and consistency in evaluating the quality of training. With a large number of providers, ensuring uniform adherence to quality standards through self-assessment alone can be challenging. The lack of external validation may also impact the credibility and assurance of the self-assessment process.
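As a purely illustrative sketch of how an online platform might streamline self-assessment at scale, the snippet below checks self-reported scores against predetermined benchmarks and flags where follow-up may be needed. The indicator names, scoring scale and thresholds are hypothetical assumptions, not drawn from any country's actual system.

```python
# Hypothetical quality indicators and minimum benchmark scores (1-4 scale).
BENCHMARKS = {
    "training_design": 3.0,
    "staff_qualifications": 3.0,
    "learner_feedback": 2.5,
}

def review_self_assessment(scores: dict[str, float]) -> list[str]:
    """Return the indicators where a provider falls below the benchmark."""
    return [
        indicator
        for indicator, threshold in BENCHMARKS.items()
        if scores.get(indicator, 0.0) < threshold
    ]

# Example: a provider submits its self-assessed scores via the platform.
submission = {
    "training_design": 3.5,
    "staff_qualifications": 2.0,
    "learner_feedback": 3.0,
}
shortfalls = review_self_assessment(submission)
if shortfalls:
    print("Follow-up needed on:", ", ".join(shortfalls))
else:
    print("All benchmarks met; no external follow-up triggered.")
```

Such automated screening does not remove the objectivity concerns noted above; it only helps triage which self-assessments might warrant external validation.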
Managing cost through a sampling approach to quality inspections
Conducting comprehensive quality inspections for every provider becomes challenging with a large volume. Hence, a sampling approach may be more cost-effective. Quality inspections can be performed on a representative sample of providers chosen through statistically valid sampling methods. This method ensures reasonable assessment of overall quality without inspecting each provider individually. However, selecting an appropriate sample that truly represents the overall quality of all providers can be challenging. There is a risk of overlooking potential quality issues in the providers not included in the sample, potentially compromising the effectiveness of quality assurance.
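A minimal sketch of such a sampling approach is given below, assuming a hypothetical register of 200 providers stratified by size; a real system would use a statistically grounded sample-size calculation and stratification criteria relevant to its own context.

```python
import random

# Hypothetical provider register: 200 providers, stratified by size.
providers = [
    {"id": i, "size": "large" if i % 5 == 0 else "small"}
    for i in range(1, 201)
]

SAMPLING_FRACTION = 0.10  # inspect roughly 10% of providers per cycle
random.seed(42)           # fixed seed so the draw is reproducible

# Draw from each stratum separately so both groups are represented.
sample = []
for stratum in ("large", "small"):
    group = [p for p in providers if p["size"] == stratum]
    n = max(1, round(len(group) * SAMPLING_FRACTION))
    sample.extend(random.sample(group, n))

print(f"Selected {len(sample)} of {len(providers)} providers for inspection")
```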
Balancing benefits and costs of quality awards
Finally, quality awards can raise awareness about quality standards within the AET sector. By acknowledging and rewarding those who excel in maintaining high standards, quality awards encourage a culture of quality and continuous improvement among all providers. They stimulate competition and motivate providers to strive for better performance. Quality awards can also serve as an effective marketing tool, enhancing the reputation of the awarded providers and increasing their appeal to potential learners. However, managing quality awards can be challenging, especially with a growing number of providers. The process of organising contests and competitions, and conducting thorough expert reviews, can be time-consuming and resource-intensive. Additionally, while they acknowledge top performers, quality awards may not provide a comprehensive view of the quality standards across all providers, as not every provider would be assessed or awarded. Hence, the use of quality awards as the sole quality assurance mechanism could lead to gaps in monitoring and make it harder to ensure overall quality consistency.
As we consider the various quality assurance mechanisms, it is evident that each has its strengths and limitations, especially with the expansion of providers. A combined approach may offer a more comprehensive solution. This discussion underscores the need for deeper research in this area, including in-depth case studies, to gain a comprehensive understanding of the dynamics at play.
References
[1] OECD (2021), Improving the Quality of Non-Formal Adult Learning: Learning from European Best Practices on Quality Assurance, Getting Skills Right, OECD Publishing, Paris, https://doi.org/10.1787/f1b450e1-en.
[2] OECD (2021), Strengthening Quality Assurance in Adult Education and Training in Portugal, OECD, Paris, https://www.oecd.org/portugal/Strengthening-Quality-Assurance-in-Adult-Education-and-Training-in-Portugal-Implementation-Guidance.pdf.