AI has the potential to significantly transform the way civil servants are organised and managed. As in other policy areas, AI tools could be used to better target and personalise human resources (HR) services, accelerate and improve HR processes to reduce administrative burdens, and boost employee productivity. Some governments are already experimenting with applying AI to discrete HR functions, including recruitment, learning and employee development processes (e.g. training). This kind of application is generally focused on process automation with the objective of ensuring faster and more accurate HR processes. AI’s potential in this field could be much greater. Given the right data, AI systems could eventually help better match people to jobs, and predict performance based on job requirements and individuals’ characteristics and backgrounds. However, many elements need to be in place for this to work. For example, public administrations would need to have strong data in at least three areas: the characteristics of their workforce, the demands of specific jobs, and indicators of individual performance. Unfortunately, most public administrations lack strong data in all three areas. To unlock this potential, governments will need to invest in more and better HR data, as well as associated skill sets in the human resource management (HRM) function.
Governing with Artificial Intelligence
AI in civil service reform
Current state of play
AI could be applied to a wide range of strategic HRM processes and activities, finding patterns in civil service data to inform more strategic workforce planning, better target HR policies or even identify needed skills during emergencies and crises. Available data and use cases indicate two areas of civil service reform where some countries are actively experimenting with AI applications: recruitment; and learning and development. These are explored and discussed further in this section.
So far, initiatives have often been fragmented pilots without a strategic approach for systemic adoption. However, governments like France are taking a coherent and strategic approach to explore the benefits of AI in civil service reform, while also recognising the potential negative impacts and setting up safety measures to address them (Box 5.19).
Box 5.19. France’s strategy for using AI in government human resource management
France has developed a structured strategy to integrate AI into human resource management (HRM) across its public administration. This strategy focuses on three key areas: AI integration, workforce planning and training for civil servants. It aims to ensure that AI is used responsibly, ethically and effectively, enhancing productivity and supporting complex decision-making processes.
The strategy first focuses on identifying the right stakeholders, tasks and tools for AI adoption within HR activities. It includes defining clear objectives for AI use, selecting reliable AI tools and establishing methodologies to ensure transparency and accountability in AI-driven processes. The plan also incorporates risk mapping and internal audits to monitor AI’s impact and help ensure its ethical use.
The strategy highlights the need for a strategic workforce planning approach specifically tailored to integrating AI in HR processes. The adoption of AI tools to perform tasks traditionally carried out by public servants has significant and multifaceted implications for HRM. To ensure a smooth transition and sustainable workforce planning, it is essential to anticipate and assess these impacts effectively. This approach involves embedding AI-related workforce considerations within existing HR planning frameworks, with a focus on:
Defining the tasks delegated to AI, ensuring alignment with operational and strategic objectives;
Assessing current and future skill requirements, including AI-specific competencies and complementary expertise;
Evaluating the impact of AI on HR professions and job roles, identifying areas for upskilling and role transformation; and
Attracting and recruiting AI-literate professionals to support, guide and oversee the responsible use of AI in HR management.
To support this integration, the French government is emphasising AI training for public servants, particularly managers and HR professionals. It has created comprehensive training programmes to upskill employees and developed ethical guidelines that merge HR and digital ethics. This approach helps ensure that AI is applied in a way that balances technology with human oversight, while preparing the workforce for the future of AI in public administration.
Improving recruitment processes
AI can support civil service reform throughout the recruitment process, making it faster and more efficient. AI tools can automate transactional tasks such as writing job descriptions, designing tailored assessment methodologies, checking candidate background documents (e.g. university diplomas), and responding to candidate queries. In Singapore, for example, 10 government agencies introduced an AI recruitment service to automate repetitive tasks in the pre-screening process, such as reviewing and screening applications. A custom-designed chatbot also administers a written test, reviewing and scoring the candidates’ written component. The service significantly reduced the agencies’ workload, making the process more efficient and effective.1 The United Kingdom has developed particularly solid efforts in this area (Box 5.20).
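The systems described above are proprietary, but the underlying pre-screening idea — scoring each application against a weighted rubric of requirements — can be sketched in a few lines. The criteria, weights and field names below are purely illustrative and not drawn from any real service:

```python
# Hypothetical sketch of rubric-based application pre-screening.
# All fields, criteria and weights are invented for illustration.

def score_application(application: dict, rubric: dict) -> float:
    """Return a weighted score in [0, 1] for one application.

    `rubric` maps an attribute name to a (predicate, weight) pair;
    the predicate receives the application's value for that attribute.
    """
    total_weight = sum(weight for _, weight in rubric.values())
    score = 0.0
    for field, (predicate, weight) in rubric.items():
        if predicate(application.get(field)):
            score += weight
    return score / total_weight

# Illustrative rubric: a verified-degree check (weighted more heavily)
# and a minimum years-of-experience check.
rubric = {
    "degree_verified": (lambda v: v is True, 2.0),
    "years_experience": (lambda v: (v or 0) >= 3, 1.0),
}

candidate = {"degree_verified": True, "years_experience": 5}
print(score_application(candidate, rubric))  # 1.0 — meets both criteria
```

A real deployment would of course add audit logging and human review of borderline scores; the point of the sketch is only that "automating repetitive pre-screening" typically reduces to applying a transparent, inspectable rubric at scale.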
Box 5.20. AI-enabled recruitment automation in the United Kingdom
HM Revenue and Customs (HMRC), the UK’s tax and customs authority, uses an AI-enabled platform called Outmatch to automate the recruitment process end-to-end for some junior roles. Candidates are asked to record their answers to six questions linked to a competency framework. The AI tool then analyses their responses and scores them. The tool is designed to deal with high candidate volumes by automating the assessment and interview stage.
One area of keen interest is the use of AI tools to assess, simplify and redefine job descriptions to attract the best candidates. HMRC is also exploring how to assist hiring managers with an AI tool capable of generating job descriptions, interview questions and social media posts, whilst another prototype enables analysis of regional employment markets to support tailored recruitment campaigns. The Cabinet Office has developed a proof of concept, Job Advert Optimiser (JAO), that translates outcomes from previously successful job descriptions with similar aims into advice. This will allow hiring managers to tailor their job descriptions to target high-quality candidates with the right skills and experience. Consideration is also being given to where AI may fit into other aspects of recruitment, such as supporting candidates — from when they identify a role through to job acceptance — sifting large numbers of applications, scheduling and planning interviews, and matching candidates on reserve lists to other available roles.
In addition, AI applied to public service recruitment holds great potential to broaden candidate pools and improve candidate screening. Public administrations often struggle to recruit proactively, address skills gaps and market job opportunities (OECD, 2021[49]). Examples here include one from Canada in Box 5.21. In Sweden, the Upplands-Bro municipal government developed a tailored AI interview robot with a private AI consulting company to make recruitment more accurate while enhancing efficiency. The AI bot conducts blind first-round interviews and evaluates candidates while excluding data on age, sex, clothing and appearance.2
Box 5.21. Increasing representation of visible minorities in Canadian defence leadership
In September 2020, Canada’s Department of National Defence (DND) launched an external EX-01 pilot recruitment campaign aimed at increasing the representation of visible minorities at the senior level. This initiative piloted new approaches while embracing inclusivity, innovative methods and technology in order to make fundamental and long-lasting change that helped increase representation and diversity and inclusion (D&I) at the DND. Key objectives of the pilot were to:
identify opportunities for members of visible minority groups;
introduce novel tools and technology to support barrier and bias-free assessments; and
assess the pilot process against traditional approaches to identify systemic barriers and biases and identify recommendations for future recruitment processes.
In partnership with various stakeholders, proper guardrails were implemented to ensure privacy and quality assurance. The process achieved key outcomes by facilitating an objective and fair assessment of candidates, while removing biases and barriers at the various stages. As a result, this pilot created career advancement opportunities and improved traditional approaches and processes to achieve better outcomes.
Facilitating learning and development
Civil service reform can use generative AI to create learning content, such as learning modules and course material based on source documents and information. Efforts are already taking root around the world. AI can also be used to make personalised recommendations and to guide learning and professional development pathways through large and complex volumes of information and data (Johnson, Coggburn and Llorens, 2022[50]). Examples in these areas include:
In 2023, the Australian Public Service Commission (APSC) collaborated with IBM to design, structure and deploy a system that generated course content based on documents provided by users (Box 5.22).
Spain’s National Institute for Public Administration (INAP) is incorporating AI into the organisation, cataloguing and search functions for its digital platforms, learning offerings and broader library. The resulting “knowledge graph” makes a large portion of the resources easily findable, searchable and shareable with Spanish civil servants, partner countries and the broader public.
Korea’s Ministry of Personnel Management’s learning and development platform incorporates AI-enabled functionality to help sort, organise and recommend content. The AI system being implemented is intended to analyse the user’s role and learning history to recommend training and material to develop certain skills from the platform’s vast catalogue of 1.4 million pieces of content (OECD, 2023[51]).
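The recommendation systems listed above are proprietary, but their core mechanism — ranking catalogue items by similarity between a learner's profile and each item's skill tags — can be illustrated with a minimal sketch. The catalogue, tags and learner profile below are invented; a real system such as the one described would draw on roles, learning histories and a vastly larger catalogue:

```python
# Hypothetical sketch of skill-tag-based course recommendation.
# Catalogue titles, tags and the learner profile are invented examples.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two tag sets (0 when both are empty)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(learner_tags: set, catalogue: dict, top_n: int = 2) -> list:
    """Rank catalogue items by tag overlap with the learner's profile."""
    ranked = sorted(
        catalogue.items(),
        key=lambda item: jaccard(learner_tags, item[1]),
        reverse=True,
    )
    return [title for title, _ in ranked[:top_n]]

catalogue = {
    "Data literacy basics": {"data", "statistics"},
    "Public procurement law": {"law", "procurement"},
    "Intro to machine learning": {"data", "ai", "statistics"},
}
learner = {"data", "ai"}
print(recommend(learner, catalogue))
# ['Intro to machine learning', 'Data literacy basics']
```

Production recommenders use far richer signals (collaborative filtering, embeddings of course text), but the shape of the problem — profile in, ranked content out — is the same.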
Box 5.22. Generating learning content in Australia
The Australian Public Service Commission (APSC) ran a six-week pilot project to use AI to design, structure and deploy an online learning course on digital skills for leadership. The pilot system allowed practitioners to “feed” the AI a variety of materials, such as articles, books and speech transcripts, into a closed information system used to create the course content. The system created a course outline, objectives, modules and content, followed by a quiz. Pilot findings showed that around 60-70% of what the system produced was usable, relevant, well-structured and accurate. In a survey of users, 91% found the pilot output valuable. To improve these figures, the practitioner creating the content could give feedback to the AI system to adjust its output.
There were several benefits to this initiative. Firstly, drafting was extremely fast. Secondly, the information was drawn from a closed system, which eliminated uncertainty about where information came from and whether it was correct. The technology highlighted areas of the modules and pointed to where the information was sourced. Thirdly, the system had programmatic “checkers” built in to review the material for issues of concern, such as discriminatory language.
While the pilot system could draft course content very quickly, it did not create production-ready content. Issues identified from the pilot were:
AI can synthesise existing content well but cannot create content that does not exist. For example, it could not write a course on the use of AI in the civil service at that stage, as no such content existed.
The reliability of the content still required expert review, and this is typically where bottlenecks in content production already exist.
The closed system provided more accurate results; however, it is an expensive option if only used for course production.
Post-pilot, the APSC recognises there is value in on-premises, closed AI systems to assist with content creation. It has, however, referred further investigation of on-premises AI to another Australian government agency, Services Australia. Further observations on Australian government agencies’ use of AI can be found in their AI statements.
Source: Information provided to the OECD by the Australian Public Service Commission, https://www.apsacademy.gov.au/news/piloting-generative-ai-address-aps-skills-gap. See also the APSC’s AI statement (https://www.apsc.gov.au/initiatives-and-programs/workforce-information/research-analysis-and-publications/state-service/state-service-report-2023-24/fit-future/supporting-safe-and-responsible-use-artificial-intelligence) and Services Australia’s AI statement (https://www.servicesaustralia.gov.au/automation-and-artificial-intelligence-ai-use).
Evidence of impact
Given the very nascent nature of the applications described above, it is still too early to provide empirical evidence of impact. Early experiments like those discussed above show significant potential to reduce time and effort required to handle large volumes (e.g. number of applicants, amount of existing learning material), broaden candidate pools, and reduce human error in decision-making. However, in most cases, there is a notable lack of rigorous, empirical evidence demonstrating their effectiveness and impact. Many AI implementations are based on theoretical potential or anecdotal success stories rather than robust, scientific evaluations. Addressing this limitation requires a concerted effort to design and conduct rigorous evaluations of AI applications in government HRM. Future implementations should be grounded in empirical evidence and tailored to the specific needs and constraints of government organisations.
Furthermore, there are many unanswered questions regarding potential negative impacts. For example, AI hiring tools may help to reduce human error, depending on how they are used. Yet they may also limit the autonomy of hiring managers in ways that affect the quality of hire related to culture, fit within the team, or other hiring considerations. They may also further limit managers’ abilities to build their own teams to achieve the results they need. The real quality of decisions made through AI systems is difficult to assess, since doing so requires a longer-term view of performance and job fit. Furthermore, there are limited baselines to measure against. Traditional (i.e. pre-AI) systems and processes face the same measurement challenges, and there are no agreed standard indicators, especially in systems without objective performance measures, as is often the case in public administrations.
As such, it may take a long time before governments can adequately measure the real longer-term impact of AI-driven decision quality versus those made by traditional systems or humans. It will also be very hard to quantify AI’s impact on productivity of HR systems, as very few standard comparable measures and benchmarks exist. The OECD is currently working with a core set of member countries to try to establish these kinds of indicators, which may help to track improvements driven by AI in the future.
Managing risks and challenges
Associated risks
“Automation bias”
Inadequate or skewed data in AI systems
Misuse or questionable use of AI, resulting in surveillance and privacy concerns
Lack of transparency and explainability
In the domain of civil service reform there is a documented risk of so-called automation bias, whereby humans prefer not to second-guess the results of automated decision aids, even when they have the ultimate responsibility and accountability to take the final decision. Making best use of AI for civil service reform will require upskilling strategic analytical capabilities in many HR activities and among hiring managers (Broecke, 2023[52]).
Another well-documented challenge relates to avoiding, detecting and addressing bias in the AI systems themselves, particularly if AI is being used to inform decision-making related to job selection and career advancement. The problem is that any organisation’s historical data reflects past decisions made by humans, and there is a significant risk that AI systems will encode these biases into their algorithms. Add to this the general lack of good employee data and performance indicators, and it becomes difficult to see how quality tailored advice could be given through such AI systems. Introducing any system in this area will require careful monitoring and evaluation mechanisms to detect and correct for bias (Johnson, Coggburn and Llorens, 2022[50]).
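One concrete monitoring mechanism — not specific to any system cited here — is the "four-fifths rule" commonly used in employment-selection auditing: compare selection rates across demographic groups and flag any ratio of lowest to highest rate below 0.8. The group labels and counts below are invented, and this single check is far from a full bias audit, but it illustrates what "monitoring and evaluation mechanisms" can mean in practice:

```python
# Minimal bias-monitoring sketch: compare selection rates across groups
# and flag disparities under the common "four-fifths" threshold.
# Group labels and counts are invented; real monitoring needs much more
# than this single metric (intersectional groups, stage-by-stage checks).

def selection_rates(outcomes: dict) -> dict:
    """`outcomes` maps group -> (selected, total); returns group -> rate."""
    return {group: sel / tot for group, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes: dict) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(outcomes).values()
    return min(rates) / max(rates)

outcomes = {"group_a": (30, 100), "group_b": (18, 100)}
ratio = adverse_impact_ratio(outcomes)
print(f"{ratio:.2f}", "flag for review" if ratio < 0.8 else "ok")
# 0.60 flag for review
```

Running such a check at each stage of an AI-assisted pipeline (screening, testing, interview) helps locate where disparities are introduced rather than only observing them in final outcomes.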
The data and privacy rights of public servants require particular attention in a public service context, where values such as merit and fairness guide recruiting, inherently limiting the types of data that can be used in AI systems. For example, some private sector recruitment tools regularly check applicants’ social media accounts and use this data to assess candidates. There is little empirical evidence that social media posts or physical features that may be assessed in video interviews have any real bearing on job performance. This raises both ethical and effectiveness questions about many AI-driven assessment tools currently available on the market (Broecke, 2023[52]).
The use of “algorithmic management” tools — software to automate aspects of management in, for example, the allocation of work schedules, the monitoring of work activities or the setting of worker targets — is increasing significantly, reaching an adoption rate of 90% in US firms and 79% in the European Union (Milanez, Lemmens and Ruggiu, 2025[53]). Up to now, government-specific studies have not been conducted. While some of these tools may help raise productivity when applied effectively, tangible concerns have been raised about existing negative impacts of AI and algorithmic tools on job quality, including work intensification, increased stress and perceived reduction in fairness (OECD, 2023[54]). AI could make jobs less fulfilling by incentivising new types of surveillance in the workplace that could harm mental health (APA, 2023[55]), or new forms of hyper-efficient yet exhausting “digital Taylorism” in which work is subject to increased surveillance and regulation (UC Berkeley, 2021[56]). AI task management may also have the potential to erode the autonomy and voice of workers, reducing human insights into how work is managed (Gmyrek, Berg and Bescond, 2023[57]). Many existing HR AI tools are designed to “optimise” workforce management (e.g. monitor, control, reduce autonomy in decision-making and problem solving), going against decades of science that shows how employee empowerment builds engagement, performance and trust. Some research suggests that framing AI as a tool used to support employees, rather than replacing them or limiting their autonomy, is critical for fostering positive perceptions of AI in the workplace (Brougham and Haar, 2017[58]).
The more complex AI systems and predictions become, the less they can be understood and explained. This reduces accountability if employers cannot explain their choices and it impedes the ability of employees to understand how to develop themselves to advance in their careers. Merit based recruitment systems are a bedrock of well-functioning public employment systems, and these require transparency and accountability to function appropriately. Employees and their employers need to clearly understand why appointment decisions are taken, and how the skills and performance of individuals are analysed (Cappelli and Rogovsky, 2023[59]).
Implementation challenges
Lack of high-quality data and the ability to share it
Explainability
Skills gaps
Quality data is essential to implementing AI in civil service reform. Unfortunately, OECD countries lack large amounts of data in most of the relevant areas, and it is often not standardised systematically across organisations to allow for more rigorous and predictive analysis. Descriptive data is often limited to age, sex, education level and career path. Performance is very hard to assess objectively and consistently across teams and organisations. Job roles are also often categorised broadly. If AI is based on bad or incomplete data, it will make bad predictions.
Implementing AI in civil service management systems requires HR professionals with the right skills and mindsets. While AI technical skills may not be required, HR leaders would need to understand the potential application of AI to their systems and have the right skills to be smart buyers of tools on the market. HR professionals working with AI tools often need analytical capabilities to understand the principles of the tools and their use of data analytics, and to interpret and challenge results. Even where such AI skills exist elsewhere in public administrations, they are often lacking in HR departments.
Untapped potential and way forward
Strengthened analytics for the present and future
Government organisations could gather a great deal of data and information on their employees that could be analysed to improve performance and employee experience; however, AI is generally under-used in the field of HRM due to several challenges. AI capabilities could allow HR practitioners and management to examine current workforce trends (ageing, skills and performance, compensation) to provide key insights into the main challenges and questions of the day: attractiveness or competitiveness of government as an employer; reskilling and upskilling needs; better targeted learning and development opportunities; or the drivers of employee and team performance and satisfaction.
AI algorithms can use time series data for predictive analytics to identify trends and make predictions about the future of the civil service. While simple regression analysis can be achieved without AI, more sophisticated operations could be developed with more complex modelling and scenario building. For example, organisations could potentially reduce employee turnover by predicting which employees are at high risk of attrition based on their tenure in a position, their levels of team engagement, and other factors. Through the analysis of big datasets, AI can identify factors and patterns that lead to excessive turnover — which is costly to organisations and a detriment to performance — and allow practitioners to take anticipatory steps for improvement. Aside from turnover, predictive analytics can assist workforce planning by anticipating skills or personnel shortages, predicting top performers for specific kinds of roles, supporting diversity and inclusion, or boosting engagement and well-being.
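The attrition-prediction idea above can be made concrete with a toy model. The sketch below fits a small logistic regression from scratch on invented records (tenure and engagement only) and scores a new employee's attrition risk; a real workforce-analytics effort would use established libraries, many more features and, crucially, the rigorous evaluation and privacy safeguards discussed elsewhere in this section:

```python
# Illustrative attrition-risk sketch. The training loop, features and
# synthetic records are invented; this is a toy, not a production model.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Fit logistic-regression weights by plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def attrition_risk(x, w, b) -> float:
    """Predicted probability of leaving within a year."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Synthetic records: (tenure in years / 10, engagement score in [0, 1]);
# label 1 = left the organisation within a year. In this toy data,
# low engagement drives departures.
X = [(0.1, 0.2), (0.2, 0.3), (0.8, 0.9), (0.6, 0.8), (0.1, 0.9), (0.9, 0.2)]
y = [1, 1, 0, 0, 0, 1]
w, b = train(X, y)
print(round(attrition_risk((0.15, 0.25), w, b), 2))  # high predicted risk
```

The value of such a model lies less in the prediction itself than in surfacing which factors (here, engagement) drive risk, so that practitioners can intervene before departures occur.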
Such applications are currently extremely nascent in government workforces; the OECD has yet to identify many concrete use cases in this area. This is likely due to a variety of challenges, such as those listed above, as well as valid ethical and privacy concerns, which are further detailed below.
Way forward
AI tools can provide leaders with a new opportunity to develop a strategic vision and direction for their public service and the HR activities needed to achieve it. AI has the potential to reshape the workforce and augment its skills in many areas. In the field of HRM, AI can speed up HR processes, better target services, knowledge and recruitment/branding campaigns, and generate valuable insights for HR managers and senior leaders on a range of issues from future skills gaps to hiring effectiveness. AI can be a transformative tool in learning and development, bringing knowledge to public servants while building their essential skills and capabilities. This all depends on a clear vision for what the future public service should look like, backed by resources and capable HR teams. This implies a joined-up strategy for workforce development in which AI has a key role to play.
Enhance transparency and give employees an explanation and the ability to contest automated decisions. Studies suggest that many candidates and employees may perceive automated assessment processes as fairer than those conducted by humans if they understand how they work and why they are being used, so long as they are confident that a human will be accountable for the final decision (Broecke, 2023[52]). This transparency should include the inputs to the decision, ensuring care is taken to obtain informed consent and manage employee privacy issues. Governments should have a clear empirical basis for the input data they choose to use.
Include HR professionals and other employees in the design, implementation and evaluation of HR AI tools. Governments should be especially careful when introducing AI tools that may reduce employee autonomy. In some cases, AI-driven automation could reduce workers’ autonomy and devalue their expertise, resulting in lower motivation, engagement and commitment. This is especially true when AI tools are directly applied to optimising their productivity, directing how they use their time and monitoring their work activities. Such tools can have unintended adverse consequences: increased stress and anxiety, which may raise work absence and turnover and thus reduce productivity in the longer run. Involving employees from the outset helps to avoid these consequences.
Upskill and reskill HR professionals for the age of AI. Automation holds many advantages and can reduce administrative burdens for HR professionals and the employees they support. AI tools can take over many repetitive and dull tasks, leaving HR professionals to focus on more complex and higher value-adding work, such as dealing with complex cases, recruiting more specialised workers and developing strategy (OECD, 2024[60]). But this will in many cases require upskilling and reskilling within the HR profession. For some types of employees, governments will need to invest in upskilling in more technical areas, like data science and programming, to help HR professionals understand and use AI systems effectively.
If AI systems are used as an input into candidate assessments, governments should have well-qualified humans interpreting the results and making the final decisions. Governments should take care to design the process in a way that minimises “automation bias”. This may include conducting traditional assessments first, and then adding AI-generated information afterwards to provide additional insights on a short list of candidates. In this way, AI can help to audit recruitment practices and improve human decision-making without replacing it. This is essential to ensuring the level of accountability necessary for merit-driven decision-making in public services.
While incorporating AI into the field of people analytics, governments should pursue the benefits of AI while accounting for costs and risks. They need to also have certain prerequisites in place, such as rigorous, trustworthy data, statistical capabilities for understanding and verifying AI-generated outputs and analysis and their weaknesses, and measures to protect employee privacy and avoid bias.