This chapter sets out how governments can realise AI’s potential while managing risks. It establishes three pillars — enablers, guardrails and engagement — that together form the OECD Framework for Trustworthy AI in Government. Enablers include governance, data, digital infrastructure, skills and talent, purposeful investment, procurement and partnerships; guardrails cover non-binding and binding instruments, transparency and risk management, and oversight; engagement spans citizens, civil servants and cross-border collaboration. The chapter calls for a systems approach, proportionate and risk-based application of measures, and practical mechanisms such as experimentation, impact assessment and auditing to support trustworthy adoption at scale.
Governing with Artificial Intelligence
4. Enablers, guardrails and engagement for unlocking trustworthy AI
Key messages
Governance and policy initiatives can help governments to fully exploit AI’s potential and address its various risks and implementation challenges. Governments should take a systems approach and seek to anticipate future changes. Proposed measures should strengthen enablers, establish guardrails and engage with stakeholders.
Enablers include establishing key governance mechanisms and processes, understanding data’s role as the foundation for AI, building digital infrastructure, fostering skills and talent, investing purposefully, effectively using public procurement and expanding AI’s potential through partnerships.
Guardrails can be binding and non-binding policy levers, transparency processes and accountability mechanisms.
Engagement with stakeholders can take the form of citizen assemblies, engaging with civil servants, involving users in AI development and collaborating across borders.
Taken together, these policy measures form a Framework for Trustworthy AI in Government, which can help governments to align their actions with the OECD AI Principles. Future OECD work will address elements of the framework in more depth.
Policy action to unlock AI’s potential
While the rest of this report discusses governments’ opportunities and challenges in governing with AI, this chapter focuses on how to realise success through governing AI in government. To fully leverage the potential of AI in government while mitigating risks, governments need to take an intentional, strategic and responsible approach. This approach should align with the OECD AI Principles, but be contextually specific and appropriate for the development and use of AI for and by governments. In particular, governments should pursue three courses of action:
1. strengthening enablers (e.g. quality data, digital and AI skills, funding and digital infrastructure) to overcome key implementation challenges and deliver the expected results
2. establishing guardrails (e.g. transparency, accountability and risk management tools) to anticipate and manage associated risks, and
3. fostering engagement with stakeholders (including the public) to develop AI systems that take their needs into account.
These actions aim to harness opportunities and address the various risks and implementation challenges associated with AI through targeted policy measures. For instance, issues related to the need for sufficient and quality data are addressed through enablers that focus on building robust data governance and infrastructure. In another example, insufficient guidance and outdated regulations can be addressed through guardrails such as binding and non-binding policy levers, including agile regulatory instruments, helping to ensure AI operates within clearly established ethical, operational and legal boundaries. Active stakeholder engagement throughout the AI system lifecycle (development, deployment and use of AI technologies) and the policy cycle (for designing, implementing and evaluating AI governance and policy) can enrich the understanding, attitudes and behaviours of stakeholders, including the public, and align technology and governance developments with societal needs.
The sections below seek to address the main points of attention for governments to create an environment that enables the strategic and responsible use of AI across government systems and functions. They provide a comprehensive analysis of specific policy actions and priorities along the three action areas. Each section outlines key policy options that governments should consider for a coherent and sustainable approach. Governments should consider these options in light of their own context, including their current digital government maturity. For instance, instead of seeking to put in place all of the items discussed at once, governments could consider progressive governance and AI adoption roadmaps based on institutional, cultural and technological capacities.
A holistic, systems approach can maximise the value of AI in government
In establishing the enablers, guardrails and engagement mechanisms discussed in this chapter, governments should take a systems approach to AI, seeking to understand and address public problems by viewing them as part of a larger, interconnected system rather than in isolation (OECD, 2017[1]). Traditionally, public policymakers have addressed social problems through discrete interventions layered on top of one another, building on a “cause and effect” relationship. However, these interventions may shift consequences from one part of the system to another, simply addressing symptoms while ignoring causes. AI represents an opportunity to reimagine how government works, in terms of both internal operations and public-facing services. Governments should think beyond how AI can fit within existing government systems and structures; they need to think about how AI can contribute to entirely rethinking processes and systems. Otherwise, governments run the risk of simply automating inefficiency and further reinforcing misaligned incentives and governance approaches. Emerging practices can assist, such as “sludge audits”, which are structured behavioural assessments of a process to identify frictions that make people less likely to complete the process or force them to expend undue psychological effort in doing so (OECD, 2024[2]). The OECD has a dedicated line of work on systems approaches that can further assist.1
Governments should recognise the potential for and seek to anticipate future changes
There is still much to learn about AI and much remains unknown about its ongoing evolution. Establishing the enablers, guardrails and engagement mechanisms discussed in this chapter may only take into account today’s knowledge — what is known about the uses and implications of current AI and about the potential for tomorrow. Yet, there are major unknowns that will only be resolved over time as the technology develops and its potential uses are explored. AI in government will be an ongoing journey of discovery, marked by developments both welcome and unwelcome, unexpected and unintended. Governments should employ an agile and adaptive approach to adjust to new opportunities and changing behaviours. Many tasks AI cannot perform today will likely become feasible in the future. AI strategies and frameworks should be flexible enough to evolve with changing capabilities and contexts. Governments need to improve their early engagement with weak signals that indicate how the future may transpire. This will enable them to understand where and when to best intervene, without waiting for processes and trends to become established, and thus expensive and difficult to shift.
These considerations are relevant not only for AI governance, but also for its use in government. The use cases discussed in this report (in-depth in Chapter 5 and synthesised in Chapter 2) generally represent incremental improvements and productivity gains. This, however, should not obscure the fact that emergent or future uses of AI could be completely new or could handle previously impossible, impractical or even inconceivable tasks, creating new opportunities and risks for government. This can lead to use cases and governance approaches that will need to be created, reinvented or stopped. OECD efforts on Anticipatory Innovation Governance (AIG) and Strategic Foresight can help governments better understand and shape potential AI futures.2
Strengthening enablers to facilitate the adoption of trustworthy AI
Enablers are the foundational elements and resources necessary for AI implementation in government.3 They create an environment where skilled public servants can effectively and reliably design and deploy AI. Their practical support allows government institutions to fully harness AI’s potential. The sections below review seven key enablers: governance, data (including open government data), digital infrastructure, skills and talent, AI investments, public procurement and partnering with non-governmental actors. These were initially defined by the OECD in 2024 ([3]) and are further developed in the sections below. Each section considers policy options governments can adopt to deploy these enablers in their contexts, drawing on international best practices.
Establishing key governance mechanisms and processes
Governments are accelerating their adoption of AI, and in most cases, have outlined their goals in national AI strategies. Yet this adoption is often piecemeal in practice, guided only at a high level and without comprehensive, robust governance arrangements to ensure AI’s responsible use, long-term impact and sustainability. As AI becomes increasingly integrated into government operations, robust governance arrangements should be established domestically to ensure the trustworthy, sustainable and effective use of AI. They can also promote a clear narrative of the benefits of AI to build support within and outside government.
Using strong leadership for a cohesive vision
Strong leadership is a critical factor in achieving AI adoption in government. It is vital for setting the right tone from the highest levels of government and for actively communicating the potential benefits of AI (Berryhill et al., 2019[4]). While establishing strategies and principles to help ensure trustworthy AI adoption is critical, solid and effective leadership can build a cohesive vision for AI and set a “tone at the top” that builds confidence in AI, both within and beyond government. For instance, in the United Kingdom (UK), the past two prime ministers (PMs), Rishi Sunak and Keir Starmer, have championed AI’s adoption both inside and outside government. Under Sunak’s leadership, the UK catalysed global attention and international collaboration on AI by convening the first global AI Summit in November 2023, with subsequent events organised by Korea and France. Each has concluded with a declaration signed by many governments outlining AI-related commitments.4 With a more specific focus on government, the UK’s Incubator for AI (i.AI), which aims to improve lives, drive growth and deliver better public services, was also launched during Sunak’s tenure. Under current PM Starmer’s leadership, the UK (2025[5]) launched the AI Playbook for the UK Government (Box 4.2) and has put forth a bold plan to use AI in “reshaping the state to make it work for working people”, including through the creation of 2 000 tech and AI apprentices in government. Finally, to cascade strong leadership throughout UK government organisations, “A blueprint for modern digital government” (2025[6]) requires all public sector organisations to include a digital leader on their executive committee by 2026.
Those at the top have the power to set a strategic direction that can permeate levels below, helping to frame the use of AI within the culture at large. As stated in the OECD Framework for Digital Talent and Skills in the Public Sector (2021[7]), “leadership that creates an environment to encourage digital transformation will communicate a clear vision for digital government and actively champion its benefits. [Such] leaders will be engaged, visible and approachable, and empower their teams through decentralising decision making”.5 A study by the European Commission (EC) (2024[8]), based on a survey of 576 public managers in seven countries, found that leadership can especially influence AI adoption by offering robust incentives and/or financial resources to implement AI initiatives, with respondents generally finding the current state of these to be unsatisfactory.
Strong leadership for AI can foster a “mission-oriented” approach to its innovation. This approach emphasises a problem-solving focus, where policy interventions are designed to mobilise resources, coordinate stakeholders, and stimulate innovation and collaboration across government and sectors to tackle the identified challenge and meet set mission targets. Mission-oriented policies often involve a combination of regulatory measures, financial incentives, research funding and targeted investments to drive progress towards the mission. Leaders play a critical role by providing top-down direction and galvanising support in order to align all pieces of government to move in unison towards the same goal (OECD, 2021[9]).6 Some governments and intergovernmental organisations have taken a mission-oriented approach to AI policy, although these are generally aimed at catalysing economic growth in the private sector (UCL IIPP, 2019[10]; Vinnova, 2022[11]).
Taking a strategic and directed approach
Governments can implement whole-of-government strategies and guidance on AI to identify and prioritise its coherent use and development in line with overarching government values and objectives. For example, the Dominican Republic’s (2024[12]) 2023 national AI strategy focuses heavily on AI in government. Canada (2025[13]), Switzerland (2025[14]) and Uruguay (2021[15]) have each developed a dedicated strategy for AI in government, and another is under development in the United Kingdom (2024[16]). The April 2025 policy from the United States (US) on “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust”, although not called a “strategy”, bears many of the hallmarks of a strategy that seeks to drive change throughout the federal government and has the added benefit of being binding in nature (Box 4.1). Many others embed substantive aims for AI in government within broader national strategies. Overall, these high-level strategies tend to touch on many of the enablers, guardrails, engagement processes and types of use cases discussed in this report. Targeted strategies also exist to guide AI efforts in certain government functions. For instance, France has developed a strategy for using AI in HRM (see Chapter 5, Box 5.19), and the aforementioned US policy gives government agencies 180 days to develop and publish their own AI strategy.
Box 4.1. Accelerating federal government use of AI in the United States
In the United States, pursuant to requirements of the AI in Government Act of 2020, on 3 April 2025, the White House Office of Management and Budget (OMB) issued M-25-21 Accelerating Federal Use of AI through Innovation, Governance, and Public Trust. The policy promotes AI innovation, responsible adoption and use of AI, and safeguarding American protections on privacy, civil rights, and civil liberties. According to the policy, among other things, agencies must:
Identify a Chief AI Officer (CAIO) to serve as a senior advisor, champion agency AI goals, coordinate AI efforts within their agency, and represent the agency with coordination bodies and external fora; with OMB committing to convening an interagency CAIO Council to support coordination to maximise efficiencies.
Remain accountable by meeting reporting requirements, including updating an AI use case inventory at least annually.
Implement minimum risk management practices for AI that could have significant impacts when deployed (“high-impact AI”) in a manner proportionate to the anticipated risk from its intended use, as described further (Box 1.3).
Publish agency AI strategies for identifying and removing barriers to the responsible use of AI and achieving enterprise-wide improvements in the maturity of their applications (CFO Act agencies).
Strategies must include an assessment of the agency's current state of AI maturity and a plan to achieve the agency's AI maturity goals, by addressing plans and processes to, among other things, develop AI-enabling infrastructure (e.g. high-performance computing infrastructure) across the AI lifecycle; ensure access to quality data; develop enterprise capacity for AI innovation; recruit, hire, train, retain and empower an AI-ready workforce and achieve AI literacy for non-practitioners involved in AI; and develop the necessary operations, governance and infrastructure to manage risks from the use of AI.
Develop a GenAI policy that sets the terms for acceptable use of GenAI for their missions and establishes adequate safeguards and oversight mechanisms that enable GenAI to be used in the agency without posing undue risk.
Proactively share data and AI assets across the federal government, including custom-developed code and models, whether agency-developed or procured, and, to the extent practicable, release and maintain the code as open-source software in a public repository (some exceptions apply).
Develop and publish agency compliance plans to achieve consistency with M-25-21 (to be updated every two years).
Revisit and update where necessary internal policies on IT infrastructure, data, cybersecurity and privacy to align with M-25-21 and other relevant executive orders and laws.
Responsibly procure AI capabilities (Box 4.7).
Note: The policy generally applies to all Executive Branch departments and agencies, including independent regulatory agencies. Some parts of the policy apply only to “CFO Act” (Chief Financial Officers Act of 1990) agencies. National intelligence agencies and the Department of Defense are also excluded from some requirements. The source document includes more specific details on applicability.
In an attempt to move beyond strategy, some governments have developed comprehensive guidance. For instance, in addition to the AI Playbook for the UK Government (Box 4.2), New Zealand (2025[18]) has established a Public Service AI Framework, and Ireland (2025[19]) has published Guidelines for the Responsible Use of AI in the Public Service. Such guidance not only helps implement strategies but can also help overcome risk aversion in implementing more formal laws and regulations (see Guardrails discussion below) by removing the need for each organisation or team to make its own interpretations. As stated by the co-founder of Code for America, “well-meaning and well-written legislation originates at the top of a very tall hierarchy, and as it descends, the flexibility that its authors intended degrades. Laws often have an effect entirely different from what lawmakers intended because of this cascade of rigidity” (Pahlka, 2024[20]).
To further counteract risk aversion, guidance could promote experienced public servants’ use of judgement and discretion. It could also acknowledge that, as with any human action, leveraging AI cannot be risk free. Insights from behavioural science can be used to craft guidance and communications in a way that achieves the desired effect on users, maximising the value of AI adoption while mitigating risks and increasing the likelihood of meaningful behavioural change and responsible AI adoption (OECD, 2021[21]).7
Box 4.2. Artificial Intelligence Playbook for the UK Government
The UK Government Digital Service (GDS), with the support of a variety of central government departments, private sector technology companies, academic institutions and users, published the Playbook in February 2025. It includes 10 principles alongside guidance on several issues.
10 principles to guide AI in government organisations
1. You know what AI is and what its limitations are.
2. You use AI lawfully, ethically and responsibly.
3. You know how to use AI securely.
4. You have meaningful human control at the right stage.
5. You understand how to manage the AI life cycle.
6. You use the right tool for the job.
7. You are open and collaborative.
8. You work with commercial colleagues from the start.
9. You have the skills and expertise needed to implement and use AI.
10. You use these principles alongside your organisation’s policies and have the right assurance in place.
Explainers and guidance
The Playbook includes informational material on what AI is, fields of AI, applications of AI in government, limitations, building AI solutions, building a team, defining the goal, buying AI, using AI safely and responsibly, ethics, legal considerations, data protection and privacy, security and governance. It also includes a series of AI use cases that illustrate real-world efforts in the UK government.
Determining whether AI is the best solution
Guidance should include a focus on determining whether AI is the best solution for a given problem, thereby taking a step back from focusing on AI itself. While AI has tremendous capabilities, it is not always the best solution and in many cases is not viable. A key finding from a recent report from the Ada Lovelace Institute (2025[23]) is that “there is a surprising lack of evidence on the effectiveness and impact of AI tools, even from a purely technical standpoint. Evaluating AI interventions in context is crucial to determining their performance and value compared to existing manual or traditional methods”. Work by RAND found that an inadequate understanding of the problem to be solved and projects that use AI unnecessarily are two main drivers of AI project failure (2024[24]; 2025[25]).
A common issue with AI is that people start with solutions then look for problems for the technology to solve. In general, governments should seek to understand and focus on the outcomes both governments and citizens seek to achieve and the problems preventing that. Armed with this knowledge and these priorities, they can then identify whether AI (or something else) is the best solution to help achieve these goals (Berryhill et al., 2019[4]; Mulgan, 2019[26]). Accordingly, governments need capacities for problem identification and understanding. They will also need to leverage other enablers presented in this chapter, including workforce skills for understanding AI’s strengths and weaknesses relative to other technologies, and processes to engage users to understand their needs.
Some governments have built such considerations into government-wide guidance. For instance, the AI Playbook for the UK Government indicates that public servants should “be open to the conclusion that, sometimes, AI is not the best solution for your problem: it may be more easily solved with more established technologies”. The United Kingdom’s (2020[27]) Guidelines for AI Procurement advises AI procurement officials to “start with [a] problem statement” and articulate “why you consider AI to be relevant to the problem, and be open to alternative solutions”. In the United States, the AI Guide for Government ([28]) includes components to “focus on the root problem” and to consider “Is it the best option to solve this particular problem? Have you evaluated alternative solutions?”
A process for making these determinations could be integrated into an ex ante impact evaluation (see Guardrails section below on “Impact assessments”) or established as an independent step before a project enters the AI pipeline.
Defining clear roles and responsibilities
Governments should define clear roles and responsibilities to facilitate the coherent development, use and potential scaling of AI. These roles and responsibilities should be defined and agreed upon with relevant stakeholders and assigned to government institutions, incorporating them into the institutions’ mandates. This enables a solid institutional structure to support the implementation of national strategies within and across individual institutions, and it facilitates accountability and oversight across the public administration. Several European countries are expanding the mandates of existing ministries or agencies to ensure coherent AI deployment. Examples include Norway’s Ministry of Digitalisation and Public Governance, and Spain’s Secretary of State for Digitalisation and AI within the Ministry of Digital Transformation and Public Management (OECD, 2024[3]). The United Kingdom has consolidated most AI roles and responsibilities across sectors into its Department for Science, Innovation and Technology (DSIT) (OECD/UNESCO, 2024[29]). At the individual level, the US policy in Box 4.1 requires federal government agencies to designate a CAIO to lead their AI efforts. Governments may also need to parse out various roles and responsibilities to achieve a variety of strategic objectives. For instance, in Chile and Colombia, the national AI strategies link responsible actors to each commitment defined in the strategy. Colombia also details the time frames in which responsible actors need to achieve them, as well as budget and monitoring indicators (OECD/CAF, 2022[30]; CONPES, 2025[31]). In establishing roles and responsibilities, governments should make clear which entity or entities have the authority to establish policy over AI use in government. Indeed, OECD work with countries has uncovered confusion around who is responsible for rule-setting, potentially hindering digital transformation efforts.
Coordinating efforts within and across government
Governments can reinforce coordination and collaboration efforts to ensure a holistic approach to AI adoption and governance. Establishing inter-ministerial task forces or committees can facilitate decision-making, communication and collaboration across different institutions. These mechanisms enable all actors to take part in setting overarching objectives and work together towards achieving them. For instance, the US policy discussed in Box 4.1 requires the establishment of a cross-government CAIO Council. In another example, Australia established a temporary AI in Government Taskforce (September 2023 to June 2024) to develop policy, standards and guidance to enable the safe, ethical and responsible use of AI in the public service (OECD, 2024[3]).8 At the sub-national level, in the government of Dubai in the United Arab Emirates, 22 CAIOs from across the government are charged with leading and coordinating AI efforts (WAM, 2024[32]).
Coordination is also important transversally, across levels of government. Many AI applications have significant local impact, particularly in public service delivery and social welfare, as local governments are closest to citizens and residents. However, without coordination, fragmentation in AI approaches can emerge. Where municipalities and regional governments develop AI solutions in isolation, inefficiencies, duplication of efforts and inconsistencies in governance frameworks and user experience can result (Verhulst and Sloane, 2020[33]). Similarly, designing national strategies and policies without consideration for local needs can result in approaches that are mismatched or unworkable for realities on the ground. Denmark has taken steps to address this challenge through its new Digital Taskforce for AI, which is working to scale AI adoption across all levels of government, ensuring alignment in priorities, standards and governance approaches.9 Similarly, in Sweden, AI Sweden and Vinnova launched the Collaboration for AI in Municipalities and Civil Society initiative (Kraftsamlingen) to help municipalities and civil society organisations integrate AI into their operations. Since 2022, this initiative has provided tailored support, including guidance on AI adoption and funding opportunities for concrete projects, fostering a more coordinated and effective AI ecosystem across local governments.
Creating spaces to experiment
Governments need to allocate time and space to explore using AI, as both experimentation and iterative learning are crucial to developing AI capacity (OECD/CAF, 2022[30]). In addition to helping promote learning and identify new possibilities and approaches, controlled environments for AI experimentation and testing facilitate the timely identification of potential technical flaws, behavioural biases of both AI systems and the people using them, and associated governance challenges. Furthermore, through experimentation, AI systems can be incubated until the solutions are technically robust enough to scale up. In doing so, they can also surface public concerns, especially through testing under quasi-real-world conditions (OECD, 2019[34]). Such approaches entail engaging stakeholders throughout the development phase, evaluating user needs, assessing data availability and quality, and continuously monitoring progress from the prototyping and piloting phases (OECD/UNESCO, 2024[29]). Such environments include innovation centres, labs and sandboxes. Experiments can operate in “start-up mode” — whereby they are deployed, evaluated and modified — then be scaled up or down, or abandoned quickly (OECD, 2023[35]). Beyond in-house experimentation, governments can also work with non-governmental actors, such as GovTech startups, to design and execute AI experiments (see “Turning to GovTech startups” below).
Additionally, these environments foster collaboration between government, academia and industry, promoting the exchange of ideas and accelerating the development of AI technologies. By simulating real-world conditions, these facilities enable rigorous validation of AI systems, ensuring they are robust, reliable and safe prior to deployment. This approach to testing and experimentation not only enhances the effectiveness of AI solutions but also builds public trust by validating these technologies through early identification and addressing potential risks, biases or inefficiencies before wide deployment.
For example, the EU, in collaboration with its Member States, has launched a network of permanent Testing and Experimentation Facilities (TEFs), including CitCom.ai, which focuses on smart cities and communities. The initiative accelerates the development of trustworthy AI in the European Union by providing innovators — both companies and public agencies — with access to testing and experimentation of AI-based products in real-world conditions. Other examples include:
In the United States, the Mitre Corporation, a government-funded research and development (R&D) centre, is developing an AI supercomputer to power a new AI sandbox, which will be capable of training new, government-specific advanced AI systems.10
In the United Kingdom, the Incubator for AI (i.AI) promotes AI experimentation and eventual scale-up through four key approaches: 1) prototyping to quickly test and evaluate ideas for AI applications; 2) delivery to scale up successful prototypes to relevant government teams where they can have an impact; 3) modularisation to share technical work across government, including open sourcing code; and 4) convening and advising to identify areas for sharing learning and products.11
In France, ALLiaNCE, an interministerial AI incubator launched by the Interministerial Directorate for Digital Affairs (DINUM) in July 2023, illustrates a government-led initiative to structure the experimentation and scaling of AI across core government functions.12 The ALLiaNCE incubator applies an agile, user-centric “product mode” methodology — originally developed by beta.gouv.fr — focusing on fast iteration, user feedback and measurable impact, thereby accelerating AI adoption in government. ALLiaNCE’s structured selection criteria — based on impact, mutualisation potential, user engagement and ethical compliance — demonstrates a rigorous approach to testing and scaling responsible AI projects.
In Australia (2024[36]), over 60 Australian Public Service (APS) agencies conducted a six-month trial of Microsoft Copilot. Over 7 700 public servants participated in the trial, with varied results; in aggregate, however, users perceived improvements in efficiency and quality when using AI for summarisation and for preparing first drafts of documents. The trial also showed that AI adoption requires concerted efforts to address technical, cultural and capability barriers.
Portugal and Spain have each engaged with GovTech organisations to promote experimentation with digital technologies, including AI, in justice administration.13 Also on GovTech but with a broader scope, Spain’s GovTechLab is an AI use case incubator that identifies scenarios where generative AI can have an impact on public administrations – whether by achieving greater efficiency in the provision of public services, reducing workloads or improving citizen service.14 Twenty out of the 300 identified use cases will be piloted in areas such as document classification, AI assistants and the preparation of tenders and grants. Those that are successful will be scaled and offered as a service to the entire administration.
In establishing small AI experiments and pilots, governments should consider their definition of success and establish measurement and evaluation frameworks to determine whether a project was successful, identify what worked and what did not, and help capture and disseminate lessons learned. The United Kingdom’s Guidance on the Impact Evaluation of AI Interventions serves as a good example, providing considerations on AI projects from small-scale testing through full implementation.15
The OECD is currently developing a dedicated report on AI experimentation in government to review current practices and derive key lessons learned to help inform and guide policymakers in establishing their own experimentation guidance for their organisations (OECD, forthcoming[37]).
Creating a strong data foundation
Data serves as the foundational asset driving the capacity of AI to function, evolve and create public value. Drawing on the concept of “garbage in, garbage out”, AI performance directly correlates with the quality and representativeness of the inputs it is trained on; AI systems often require vast amounts of data across the AI system lifecycle to deliver valuable outputs.
Relevant data can be derived from government, the private sector or other sources. This section mainly focuses on government data, though the OECD (forthcoming[38]) is conducting work that systematises the sources from which AI developers obtain data for AI training and highlights their main attributes.
Accessing and sharing government data for AI brings complex data governance challenges. Governments face regulatory and operational hurdles, from safeguarding privacy, unbiased results and data security to navigating the policy and legal frameworks governing data sharing, use and intellectual property rights. Public sector organisations also have to tackle technical issues, from ensuring interoperability across data systems to building the technical capacity to manage data effectively.
Ensuring privacy, security and intellectual property rights
Governments are building frameworks, guidelines and mechanisms to promote strong data governance that safeguard privacy, intellectual property and security. These often result from collaborations among regulatory bodies, industry stakeholders and civil society. For example:
Korea’s Personal Information Protection Commission (2023[39]) has released a guide for personal information processing and AI development. This guide outlines the legal basis for processing personal data, establishes safety standards, and suggests measures to protect individuals' rights within AI systems.
New Zealand’s (2025[18]) Public Service AI Framework and the United Kingdom’s (2025[22]) AI Playbook for the UK Government each cover principles for safe and privacy-preserving use of AI in government.
In bridging the concepts of experimentation (discussed above) with personal data protection, France has established a data sandbox to provide an enabling environment for safe experimentation, coupled with training and hands-on support in managing personal data and ensuring regulatory compliance (Box 4.3).
Some governments are exploring privacy-enhancing technologies (PETs), such as data anonymisation, on sensitive data used for AI training. These technologies can, in turn, be enhanced through AI (OECD, 2024[40]).16 A minimal illustration of one such technique is sketched below.
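To make the anonymisation-related techniques mentioned above concrete, the short Python sketch below shows pseudonymisation: direct identifiers are replaced with salted, non-reversible tokens before records are used for AI training. It is a minimal sketch under stated assumptions — the field names, salt handling and values are hypothetical and are not drawn from any of the initiatives cited in this section.

```python
# Minimal sketch of pseudonymisation, a common privacy-enhancing technique:
# direct identifiers are replaced with salted, non-reversible tokens before
# the data are used for AI training. All field names and values are illustrative.
import hashlib

SALT = "replace-with-a-secret-salt"  # in practice, stored securely (e.g. in a key vault)

def pseudonymise(identifier: str) -> str:
    """Return a stable, non-reversible token for a direct identifier."""
    return hashlib.sha256((SALT + identifier).encode("utf-8")).hexdigest()[:16]

record = {"person_id": "1985-04-12-1234", "municipality": "Lyon", "benefit_type": "housing"}
safe_record = {**record, "person_id": pseudonymise(record["person_id"])}
print(safe_record)  # the original identifier no longer appears in the training data
```

Pseudonymisation alone does not guarantee anonymity, since indirect identifiers can still allow re-identification; such techniques are therefore typically combined with access controls and the regulatory safeguards described above.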
Box 4.3. France’s personal data sandbox for AI in public services
In 2023, France's data protection authority (CNIL) launched a “sandbox” initiative to support innovation in AI for public services. This sandbox offers selected organisations expert guidance to help them navigate personal data regulations early in their project development. While it does not remove any legal requirements, it aids in identifying solutions to compliance challenges. Four projects were selected to receive guidance from CNIL's new AI department:
Albert (DINUM): assists civil service agents with a language model to improve responses to user inquiries, with pilots in "France Services" centres.
Job Intelligence (Pôle Emploi): provides personalised job search guidance using professional data to match job seekers with tailored services.
Ekonom AI (Nantes Métropole): offers water consumption insights and recommendations for residents, supporting ecological goals and potentially adaptable to other public policies.
RATP Video Project: develops AI to detect events through matrix data capture, ensuring privacy by design with no personal data collection.
Source: (CNIL, 2023[41]; [42]).
Ensuring representativeness in data
Ensuring AI systems are trained on representative data is crucial for delivering accurate and relevant outcomes. In some countries, different populations have unique languages and traditions. In others, different demographic or other contextual factors shape the data needed for AI to be effective. As described in the OECD Good Practice Principles for Data Ethics in the Public Sector (2021[43]), using data that is not representative to train AI can lead to significant issues, particularly for government applications that require fair and accurate policies and decisions that can tangibly impact the target population. These issues include biased algorithms and decisions, and an inability to develop tailored services and policies for groups underrepresented in data, as discussed in Chapter 1.
Governments are taking action to address this issue. For instance, a number of countries have invested in efforts to promote representativeness in languages (OECD, 2023[44]; Peixoto, Canuto and Jordan, 2024[45])17. Examples include:
To promote language representation, the Common Danish Language Resource initiative and the Danish Platform for Danish Language Resources, led by the Danish Agency for Digital Government (2024[46]), aim to collect, develop and display language data and other tools that can support the development of Danish AI solutions.
Greece (2024[47]) is pursuing the development of a Greek Language and Culture Data Space, which focuses on integrating Greece's linguistic and cultural heritage into AI applications.18 Also related to the Greek language, the development of the “Meltemi” and “Llama-Krikri” LLMs represent promising steps in this direction, highlighting the importance of open access and collaborative efforts in expanding the available linguistic resources.19
India’s Bhashini platform, launched under the National Language Translation Mission (NLTM), is India’s flagship AI-led language infrastructure project. It supports real-time translation across 22 official Indian languages and dozens of dialects and provides support for multilingual voice assistants and AI service delivery interfaces.20 It was made possible through a massive citizen engagement exercise to build multilingual datasets that boost data representativeness, with models and application programming interfaces (APIs) made available as open source.
In Saudi Arabia, the Saudi Data and Artificial Intelligence Authority (SDAIA) has launched ALLaM, an LLM developed with 500 billion tokens and over 300 000 Arabic texts, including encyclopaedias, scientific research and historical works (M Saiful Bari, 2024[48]). ALLaM aims to reflect the linguistic and cultural richness of the Arabic language.
Spain is working on a family of AI models, called ALIA, that are heavily trained on native Spanish and other official language data and will be available as open source.21 The first use cases include an assistant for diagnosing heart failure in the public health sector and an assistant to facilitate tax officials’ replies to citizens.
Language models for Native American languages have also been developed. Although not developed by government, researchers have developed LakotaBERT to support language revitalisation efforts for “Lakota, a critically endangered language of the Sioux people in North America” (Parankusham, Rizk and Santosh, 2025[49]).
Enabling effective and trusted data access and sharing
As discussed in Chapter 3, governments often face a significant shortage of easily available, relevant, high-quality data necessary for training AI systems effectively (OECD, 2025[50]). Addressing this gap requires a focused effort on enhancing data access, including through collaborative data collection and open arrangements (Box 4.4).
Box 4.4. Sweden’s collaborative data gathering for Svea
Svea is a Swedish initiative coordinated by AI Sweden that unites government agencies, municipalities, regions and industry to address the challenges of creating AI solutions for public services. The primary focus is pooling resources to gather Swedish-language data that reflects the unique needs of government — a task too large for any single organisation.
By collaborating, government organisations can share the workload of data collection essential for developing a useful AI assistant. In the first phase, participants identified specific needs and began generating data from within their organisations to train the system. In the upcoming phase, they will gain access to shared databases of relevant national information to further inform the AI assistant.
Source: (AI Sweden, 2024[51]).
Another key initiative is open government data. On average, only 46% of high-value government datasets are available as open data across the OECD (2023[52]), compared to more than 80% in France and Korea. Challenges remain in fostering the re-use of open data and integrating these datasets into AI systems. Results from the 2023 edition of the Open, Useful and Re-usable data (OURdata) Index show that countries perform better in data availability and data accessibility than in government support for data re-use (Figure 4.1).
Figure 4.1. OECD Open, Useful and Re-usable data (OURdata) Index, 2023
A key question is how to increase the value of open government data for AI systems by design, and thus its accessibility and AI-readiness. On the one hand, the standardisation (e.g. in terms of structure and formats) of open government data can reduce the time AI developers need to invest in preparing data to train AI systems. On the other hand, the increased use of tools such as APIs can also support data integration with AI systems by providing a standardised method for sharing and accessing data automatically, directly from its source, as sketched below. Today only 47% of high-value datasets are released with APIs (OECD, 2023[52]).
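To illustrate why API access matters for AI-readiness, the hedged Python sketch below shows how a data pipeline might retrieve dataset metadata from an open data portal in a machine-readable way. The call pattern follows the CKAN "action" API convention used by several national portals, but the portal URL and dataset identifier are hypothetical placeholders, not real resources.

```python
# Sketch of machine-readable access to an open government data portal via an API.
# The base URL and dataset id are placeholders; the call pattern follows the
# CKAN "action" API exposed by many national open data portals.
import requests

BASE_URL = "https://data.example.gov/api/3/action"    # hypothetical portal

response = requests.get(
    f"{BASE_URL}/package_show",
    params={"id": "national-air-quality"},            # hypothetical dataset identifier
    timeout=30,
)
response.raise_for_status()
dataset = response.json()["result"]

# A standardised, structured response lets an AI pipeline discover and fetch
# resources directly from the source, without manual downloads or scraping.
for resource in dataset.get("resources", []):
    print(resource.get("format"), resource.get("url"))
```

Because the response structure is standardised, the same code can be pointed at any portal following the convention, which is precisely what lowers data preparation costs for AI developers.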
Beyond open data, other relevant initiatives include increasing access to large government-held or publicly funded datasets, such as Korea’s AI Hub Data Finder (2024[53]), which provides access to text, imagery, video, audio and sensor datasets relevant for AI-training in areas such as healthcare, public transportation and disaster and safety.
Using private sector data
Both public and private sector data can play a vital role in developing AI applications for government. While government data provides essential insights into demographics and public services, private sector data — such as on mobility patterns, consumer behaviour and financial trends — can enhance these insights. For example, AI systems for urban planning can benefit from telecommunications data to analyse traffic flow, while healthcare AI can leverage anonymised patient data from private clinics to improve disease prediction. By combining both sources responsibly and ethically, AI applications can become more accurate, efficient and responsive to public needs. One example of pooling and combining data from the public and private sectors is the Common European Data Spaces. The purpose of the data spaces is to make more data available for access and re-use across the European Union in a trustworthy and secure environment for the benefit of European businesses and citizens (OECD, 2024[54]; EC, 2025[55]).
Creating a conducive environment with data governance
From enabling data access and sharing to building the foundations necessary to make AI in government a possibility, governments need to develop robust “data governance” arrangements, which can be integrated into broader AI strategies and policies (OECD, 2024[3]).
Data governance refers to “diverse arrangements, including technical, policy, regulatory and institutional provisions, that affect data and their creation, collection, storage, use, protection, access, sharing and deletion, including across policy domains and organisational and national borders” (OECD, 2022[56]).
Originally developed to explore the specific arrangements that should be in place to enable data access and sharing, the OECD framework for data governance in the public sector can be applied to the context of data for AI-systems (Figure 4.2). Governments can develop data governance capabilities in the public sector by prioritising the development of comprehensive data strategies, defining leadership roles and establishing a vision for managing and governing data at a more technical level to realise AI’s intended benefits and outcomes.
Figure 4.2. Data governance in the public sector
For example, Canada’s 2023-2026 Data Strategy for the Federal Public Service outlines the desired outcomes and guiding principles to advance sound data governance across the federal government as a whole, along with expectations for roles and responsibilities.22 Other examples include the United Kingdom’s National Data Strategy and Australia’s Data and Digital Government Strategy.23 Greece (2024[47]) is pursuing a flagship programme on Data Governance and AI Strategy Coordination to establish the elements necessary to support an AI-ready public and private sector. The US policy discussed in Box 4.1 puts in place several requirements to improve data governance. It is also important to embed stakeholder engagement with data rights holders, such as citizens, businesses and civil society representatives, who could be impacted by the use of data for AI — either in the context of intellectual property rights or personal data rights (OECD, 2022[58]), or by the potential use of inadequate or skewed data.
Governments should support their vision and strategy on data for AI with adequate capacity for consistent implementation across the public administration, along with guidelines and legal frameworks to ensure effectiveness (OECD, 2019[59]). This can include numerous areas, such as improving data literacy and skills for AI, with examples such as Brazil’s CAPACITA GOV.BR initiative24 and Argentina’s National Programme for Enhancing the Protection of Personal Data. This can also include efforts to boost coordination and institutional collaboration, with examples such as the Norwegian Resource Centre for Sharing and Use of Data (Digdir, 2024[60]). Finally, it is essential to recognise the enabling role of legal frameworks that orchestrate and accelerate the integration and exchange of data between public institutions, while safeguarding individual rights and privacy. Clear, modern and applicable legislation on data governance and personal data protection is important for deploying trustworthy AI systems at scale. In Chile, a draft Data Governance Bill is currently under discussion, which defines principles, roles and interinstitutional coordination mechanisms for data governance. In addition, a new Personal Data Protection Law, aligned with international standards, is entering into force. Other examples include Ireland’s Data Sharing and Governance Act 2019 and the EU General Data Protection Regulation (GDPR), Data Act, Data Governance Act, Open Data Directive and data interoperability frameworks.25
Finally, the delivery portion of data governance refers to the processes, mechanisms and tools that enable the operational implementation of data governance at the organisational and team level, ensuring that sound data governance and management practices are implemented and integrated across the AI data value cycle (OECD, 2019[59]). One example is the United States’ (2024[61]) assessment of Data Operations (DataOps) maturity across federal agencies as part of its AI Guide for Government. This framework evaluates how well organisations can discover, access and utilise data to support AI development throughout the data value lifecycle. Key components include securing a comprehensive data asset catalogue, flexible data access methods and tools that facilitate documented AI experiments. Other key components of delivering data governance are the technical skills and job profiles needed, including data scientists, domain experts, data engineers and data providers, who are involved in data collection and processing for AI (OECD, 2022[58]). Issues related to data infrastructure are covered in the next section.
Building out digital infrastructure
Ensuring the availability of reliable and scalable digital infrastructure can assist in supporting and scaling AI in government. In addition to data itself, as discussed in the previous section, data infrastructure, scalable computing platforms, AI foundation models and common AI tools are important building blocks for AI in government. Other forms of digital infrastructure also exist and are discussed in OECD work (2024[62]). This section focuses on those most relevant to AI.
In their attempts to pursue trustworthy and scalable AI systems, governments are confronted with strategic decisions regarding the underlying infrastructure. On one hand, building national capacity through the development of national digital infrastructure can help a country implement its own data protection and privacy rules. A number of countries are seeking to develop such capacity (Letzing, 2024[63]; France Élysée, 2025[64]; African Union, 2024[65]; Ray, 2025[66]; EC, 2025[67]; Brizuela et al., 2025[68]). On the other hand, this could also contribute to technological fragmentation and closed ecosystems that limit international collaboration (Komaitis, Ponce de León and Thibaut, 2024[69]; Frazier, 2025[70]). Governments need to consider various options in determining a balance that is appropriate to them for developing solutions in-house versus in collaboration with the private sector and with other countries.
Digital infrastructure is not only relevant for national governments; it can also be formative in AI adoption in subnational governments, such as cities. The development of shared and reusable digital tools can help local governments overcome the entry costs due to the underlying economies of scale and allow AI solutions to be tailored to local needs and contexts.
Computing power and data infrastructure
Access to computing infrastructure resources can be key to the effective development and use of AI in government (OECD, 2022[71]). Choosing between on-premises and cloud solutions for AI deployment depends on specific needs, political choices, regulatory requirements, budget constraints and long-term goals.
On-premises solutions offer greater control, customisation and security, making them suitable for highly sensitive applications and for conforming with data localisation laws (Redapt, 2023[72]). Cloud solutions, on the other hand, provide unparalleled scalability, cost efficiency and access to cutting-edge AI technologies, making them ideal for dynamic and rapidly evolving projects and more practical than on-premises solutions for small projects or for newer or smaller entrants to AI development (Dombo, 2023[73]). More than half of OECD countries have cloud technology initiatives in place, including storage and computing capabilities (Infrastructure as a Service, or IaaS). Notably, access to cloud technologies relies on both public and private solutions (48% vs. 52% respectively), with several countries pursuing the development of public-sector-led cloud technologies (OECD, 2024[74]). In many cases, a hybrid approach combining on-premises or otherwise dedicated infrastructure with public cloud (shared, third-party) resources (“hybrid cloud”) can offer a balanced solution, leveraging the strengths of each. Notably, work by RAND (2024[24]; 2025[25]) found that companies able to use cloud solutions generally did not face challenges in securing adequate compute, while those that could not transfer data to the cloud faced significant challenges that contribute to AI project failure.
The global demand for AI-ready data centre capacity could triple by 2030, demonstrating the growing use of AI (McKinsey, 2024[75]). Carbon emissions produced by data centres and data transmission networks are already estimated at 1% of all energy-related emissions, but have generally grown only modestly despite rapidly growing demand for digital services, due in part to increased hardware and model efficiencies over time (OECD, 2022[71]; IEA, 2023[76]). Recent analysis from the International Energy Agency (IEA) (2025[77]) found that data centres are among the fastest growing sources of emissions and that such emissions could increase significantly in the next ten years, but also that “widespread adoption of existing AI applications could lead to emissions reductions that are far larger than emissions from data centres”. Nevertheless, the sizeable carbon emissions linked to data highlight the need for practices to manage AI’s energy requirements. The increasing amount of water needed to cool data centres is also important to recognise (Metz et al., 2025[78]). Recent research and industry developments indicate a growing trend in AI towards the adoption of smaller and/or more specialised models, such as small language models (SLMs) that consume fewer resources, require less data and are less expensive (Hassani et al., 2022[79]; Jones, 2025[80]).
Box 4.5. Korea’s shared data centres and government cloud
Korea’s National Information Resources Service (NIRS) has been working with the Ministry of the Interior and Safety to upgrade key hardware, networks and management tools to help modernise Korea’s technology and enable migration to the cloud. A critical part of this has been the construction of new government data centres, which can help ensure compliance with government requirements, cost-efficiencies with a reduced technology footprint, and job creation and local investment in target areas. These data centres have also been made available to the government’s main partners in the private sector, which helps ensure that companies holding or handling sensitive data are doing so in an environment that meets the government’s requirements for security, back-up and redundancy, among others. With measures around sustainability and renewable energy, the data centres help reduce the environmental impact of Korea’s digital government, particularly as it prepares to make greater use of AI solutions.
Source: OECD Digital Government Review of Korea (forthcoming).
In seeking a holistic approach, some countries are establishing compute and data infrastructure as part of a package of efforts, including approaches to digital public infrastructure (DPI),26 to support the secure and meaningful exchange of data across government, underpinned by strong data integration and analytics capabilities. Such infrastructures not only help scale AI use but can also foster interinstitutional collaboration and the generation of public value from data. Take, for example, Brazil's National Data Infrastructure (IND).27 This strategic initiative establishes a set of policies, standards, technologies and governance mechanisms to organise, share and manage public sector data securely and efficiently. Its main objective is to make government data findable, accessible, interoperable and reusable (FAIR principles), promoting transparency, the improvement of public services, administrative efficiency and evidence-based decision-making throughout government, serving as a foundation for digital transformation and innovation. The country’s gov.br platform serves as a central hub for integrating access to nearly 5 000 digital public services and includes a Conecta gov.br platform as a data interoperability layer across government. These platforms and other digital infrastructure are a foundation for AI in government. In another example, Saudi Arabia’s national cloud platform, Deem Cloud, developed by the Saudi Data and Artificial Intelligence Authority (SDAIA), consolidates digital infrastructure across more than 190 public entities and over 260 data centres (SDAIA, 2025[81]). It provides a suite of cloud services to support secure and efficient digital operations, and has contributed to energy and cost savings, as part of broader efforts to modernise public sector infrastructure and support national digital strategies. Focused more on compute, Greece (2025[82]) is building DAEDALUS, which is set to be one of the most powerful supercomputers in Europe and will be accessible to public institutions.
Developing AI foundation models
Foundation models are AI models trained on large amounts of data — generally using self-supervision at scale — that can be adapted to a wide range of downstream tasks (OECD, 2024[83]). Governments can develop their own foundation models or build upon existing ones to create approaches tailored to the specific context of a country and/or its public. Foundation models can be “fine-tuned” through further training on narrower datasets related to a particular task or domain, enhancing their performance for that specific context (Montgomery, Rossi and New, 2023[84]).
Building a clean-sheet foundation model is generally considered expensive and typically requires significant data and power resources. Examples of privately developed, proprietary foundation models include Mistral Large and those that power Anthropic’s Claude (e.g. Claude 3.7 Sonnet), Google Gemini (e.g. Gemini Ultra) and OpenAI’s ChatGPT (e.g. GPT-5).
Although they can be costly and require vast data assets and energy use, governments can indeed train and build their own foundation models, as discussed in Chapter 3. Governments can also fine-tune and tailor a proprietary foundation model to better fit their own context, which can significantly reduce the financial and time costs associated with deploying AI for specific tasks. An example of this approach is Portugal’s ChatGPT-enabled virtual assistant for public services (Box 5.46).
Governments can also use “pre-trained” open-source models. These are foundation models that have been trained by a company or other organisation that made its “model architecture and weights freely and publicly accessible for anyone to modify, study, build on and use” (Seger et al., 2024[85]).28 Most open-source models are created by large technology companies (e.g. Meta’s Llama series), although more organic, community-driven open-source models have also been developed, such as the BigScience Large Open-science Open-access Multilingual Language Model (BLOOM).29 An example of a government leveraging open-source AI models is France’s Albert virtual assistant for public servants (Box 5.46).
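To make the fine-tuning approach more concrete, the sketch below illustrates, in simplified form, how a team might adapt an open, pre-trained language model to a narrow public-service corpus using the widely used Hugging Face transformers library. The model identifier, dataset file and hyperparameters are placeholders for illustration only and do not describe any specific government deployment.

```python
# Illustrative sketch: fine-tuning an open, pre-trained language model on a
# narrow public-service corpus. Model id, data path and hyperparameters are
# placeholders, not recommendations.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "open-model/placeholder-7b"   # hypothetical open-weight model id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # many causal LMs lack a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# A small, curated corpus about national services, e.g. one JSON record per
# line with a "text" field (hypothetical file name).
dataset = load_dataset("json", data_files="public_services_qa.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-public-services-model",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, the curated dataset, evaluation protocol and compliance checks matter far more than the training loop itself, which is deliberately minimal here.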
A foundation model that is tailored to a national and governmental context — such as through fine-tuning a proprietary model or customising an open-source model — can significantly reduce the cost of adoption for teams wanting to deploy AI. Yet development and use of foundation models comes with risks that governments should keep in mind, as discussed in Chapter 1.
Governments are increasingly demonstrating interest in investing in national or regional foundation models to enhance technological sovereignty and better reflect different languages and cultures. For instance, Latam-GPT, led by Chile, is being developed by over 30 Latin American institutions to create a model trained on regional data (Gob.cl, 2025[86]), OpenEuroLLM is an EU-funded initiative to build open-source models covering all official European languages (EC, 2025[87]), and context and language-tailored models have emerged in Southeast Asia (Noor and Kanitroj, 2025[88]). Similarly, Italy’s Minerva project has developed the first LLM trained from scratch for the Italian language and is one of the few examples in this context of tailored foundation models being developed from the ground-up (University of Rome Sapienza, 2024[89]).
Common AI tools
Common AI tools that can be used across an entire government, and tailored to specific governmental needs, can serve as a form of DPI that enables and enhances other services. Sometimes built with foundation models — sometimes provided as another type of DPI — these tools provide a shared service layer and can support the automation of routine tasks, improve user interaction and enhance service delivery.
For instance, chatbots can handle a large volume of citizen inquiries, providing instant responses to common questions and freeing up human resources for more complex tasks. This not only improves efficiency but helps ensure that public services are more accessible and responsive to the needs of the community. To be considered DPI, these AI tools should solve a common, basic need and thus be usable across a wide range of public sector organisations. An example of this approach is Singapore’s Virtual Intelligent Chat Assistant (VICA), provided as a shared service and used by more than 60 government agencies to create over 100 chatbots.30
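As a purely illustrative sketch (not a description of VICA or any other product), the routing logic underlying such shared chatbot services can be reduced to a simple pattern: answer close matches to known questions instantly and escalate everything else to a human officer. The FAQ entries and matching rule below are invented placeholders.

```python
# Minimal sketch of shared chatbot routing logic: answer frequent questions
# instantly, escalate everything else to a human officer. FAQ entries and the
# similarity threshold are illustrative placeholders.
from difflib import SequenceMatcher

FAQ = {
    "how do i renew my passport": "Apply online at the passport portal; renewal takes about five working days.",
    "what are the office opening hours": "Service centres are open 08:30-17:00, Monday to Friday.",
}

def answer(inquiry: str, threshold: float = 0.8) -> str:
    """Return an instant answer when the inquiry closely matches a known FAQ,
    otherwise flag it for escalation to a human officer."""
    inquiry_norm = inquiry.lower().strip("?! .")
    best_q = max(FAQ, key=lambda q: SequenceMatcher(None, q, inquiry_norm).ratio())
    if SequenceMatcher(None, best_q, inquiry_norm).ratio() >= threshold:
        return FAQ[best_q]
    return "Your question has been forwarded to an officer who will reply shortly."

print(answer("How do I renew my passport?"))
```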
Common tools that support AI do not always use AI themselves. Other examples include:
Singapore’s Whole of Government Application Analytics (WOGAA) has been developed as a government tool for monitoring the performance of government websites and digital services, including those enabled by AI, providing a central dashboard to track website traffic, automated reports with key metrics, benchmark performance compared to other government websites and more.
In Estonia, to implement the once-only principle, all public databases are mandatorily described in the catalogue of interoperability resources (RIHA), which serves as the national registry of systems, components, services, data models, semantic assets and more, guaranteeing the transparent, balanced and efficient management of public information systems.
France’s aforementioned ALLiaNCE AI incubator fosters the development of reusable AI products across administrations, aiming to mutualise efforts and reduce duplication. ALLiaNCE provides a multi-layered service offering, including AI-enabled tools embedded in France’s digital suite (La Suite Numérique), as well as a foundational layer — Albert API — to support cross-sectoral re-use of GenAI systems. As a digital common, Albert API provides open, reusable GenAI systems, lowering the adoption threshold for public agencies and contributing to a national DPI for AI. The ALLiaNCE incubator also prioritises data sovereignty and open, sovereign digital solutions, reflecting government efforts to mitigate dependency risks and ensure trust in AI systems.
The OECD.AI Catalogue of Tools & Metrics for Trustworthy AI includes a variety of tools from inside and outside governments that may help inform them in determining their own needs.31
Fostering skills and talent
As discussed in Chapter 3, skills gaps are one of the most significant challenges to the adoption of trustworthy AI in government. Governments should therefore take commensurate actions to build internal competency and capacity.
Governments should equip civil servants with the right skills to maximise AI’s effectiveness, while ensuring safe, secure and trustworthy use. An AI-ready public service is instrumental for the development and deployment of AI solutions, as well as for the effective use of AI-powered tools to enhance daily tasks and policymaking. A strategic and coordinated approach to AI skills and talent can help target different groups within the workforce, identify skills gaps, develop the right skills and attract and retain more specialised AI talent. Strong in-house skills can also contribute to building national capacity, a topic discussed in the previous section.
This may include recruiting individuals with the skills needed to work with AI, as well as upskilling existing roles. Governments will need to anticipate that as AI evolves, the necessary skills will evolve as well, calling for continuous learning. A solid approach will require a needs assessment to map the current level of data and AI capability in the existing workforce, identify the key gaps and inform the strategy to address these needs. This would enable informed decisions about the recruitment, retention and development of digital talent. It could also inform tailored training programmes to address skills gaps, and an approach to manage and train (i.e. through skilling, upskilling or reskilling) the roles most affected by the integration of AI.
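A hypothetical, much-simplified sketch of how such a needs assessment could be structured is shown below: required proficiency levels per user group are compared with current levels to surface the largest gaps. The groups, skills and scores are invented for illustration only.

```python
# Toy sketch of a skills needs assessment: compare required vs. current
# proficiency (0-5) per user group to surface the largest gaps.
# Groups, skills and scores are illustrative only.
required = {
    "general users":  {"AI literacy": 3, "data handling": 2},
    "leaders":        {"AI literacy": 4, "risk & compliance": 4},
    "AI specialists": {"model development": 5, "MLOps": 4},
}
current = {
    "general users":  {"AI literacy": 1, "data handling": 2},
    "leaders":        {"AI literacy": 2, "risk & compliance": 3},
    "AI specialists": {"model development": 3, "MLOps": 2},
}

gaps = {
    group: {skill: required[group][skill] - current[group].get(skill, 0)
            for skill in required[group]}
    for group in required
}

for group, skill_gaps in gaps.items():
    worst = max(skill_gaps, key=skill_gaps.get)
    print(f"{group}: largest gap is '{worst}' ({skill_gaps[worst]} levels)")
```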
Assessing the needs of different user groups
Government institutions should assess the needs of different user groups to take up AI and use it effectively. An AI-ready workforce ranges from general users of AI systems to institutional leaders, data and digital professionals, and the more specialist roles. As illustrated in Figure 4.3, user groups become narrower and more specialised further down the pyramid.
Figure 4.3. Considering the level of AI literacy needed for different user groups in the workforce
General, non-specialised public servants are critical to the adoption and effective use of AI. Their training should focus on general literacy around data and AI technologies, their effective use with respect to given tasks and the ethical and legal considerations for their use.
Leaders will be integral to the adoption of AI, being the layer where the technology meets the business of government. They help raise awareness among users, drive adoption of AI and promote learning and development opportunities in public administration. This user group needs a strategic vision and executive-level understanding of what AI technologies can do, their impact and how to address risks, compliance, funding and workforce management.
Data and digital professionals spearhead and facilitate the design, development and implementation of specific services. This user group needs a higher degree of AI literacy to comprehend how AI should be deployed to deliver the intended objectives for the services they are responsible for. Along with specialists, this group may be responsible for procuring AI through public procurement processes. Thus, the right set of skills can empower them in negotiations with vendors seeking to sell AI products and services.
While a smaller part of the workforce, AI specialists are critical for the development, deployment, management and use of AI systems. They extend beyond direct AI development, including roles in procurement, legal and project management. To develop this user group, governments need to target attraction, retention and learning and development. Additionally, to overcome potential skill shortages in the market, government organisations can also consider leveraging external capabilities through public procurement and partnerships, as discussed below.
Preparing civil service users
A civil service ready for AI will require a combination of foundational digital skills and more specific literacy in data and AI. The OECD Policy Framework for Digital Talent and Skills in the Public Sector (OECD, 2021[7]) outlines the various foundational digital skills applicable to every public servant, essential for supporting digital transformation:
understanding potential of digital transformation
understanding users and their needs
collaborating openly for iterative delivery
trustworthy use of data and technology
skills to enable a data-driven public sector
digital government socio-emotional skills
digital government leadership skills.
For an AI-ready workforce, it is necessary to build on these foundations with the literacy needed for “individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace” (Long and Magerko, 2020[90]). This includes understanding AI systems, data handling and management, and ethics. One approach, for example, is the AI Skills for Business Competency Policy Framework developed by The Alan Turing Institute in the United Kingdom (2023[91]), which has five dimensions:
Privacy and stewardship: to mitigate risks around data security and protection, especially with regards to legal, regulatory and ethical considerations.
Specification, acquisition, engineering, architecture, storage and curation: for the handling and management of data to enable more effective and ethical use of AI systems.
Problem definition and communication: to identify, define and communicate those ‘problems’ that benefit most from the application of AI solutions.
Problem solving, analysis, modelling, visualisation: including the range of tools and methods that can be used for analysis, AI application and communication.
Evaluation and reflection: to understand the impact of work on AI, assess efficiency and effectiveness of AI projects and identify opportunities to improve.
The level of competency and the extent to which each is required vary depending on the user group, existing capability or the level of capability required for their roles. However, this combination of digital foundations and AI skills should lead to more effective use of AI in government.
Developing skills and talent
To develop AI skills and talent in the public service, governments should consider both internal development practices and the broader recruitment of key talent. Key mechanisms for developing existing talent include the items below, with additional examples provided in the OECD/UNESCO (2024[29]) G7 Toolkit for AI in the Public Sector.
Competency frameworks outline the key skills and learning pathways for the workforce, which should be tailored according to the needs assessments and user groups outlined above. This could also contribute to the professionalisation of key AI roles for government. For example, the EC’s Joint Research Centre (JRC) has developed a comprehensive competency framework (Box 4.6). India has also developed a dedicated competency framework to prepare public officials to lead AI transformation responsibly, tailoring content for different types of officials.32
Formal learning can include courses, workshops and online modules. For example, Ireland provides a number of relevant courses, including training on AI in the Public Service and on the country’s Guidelines for the Responsible Use of AI in the Public Service; and Greece’s Hellenic Ministry of Internal Affairs in collaboration with Google has created AI training courses for public servants.33 The globally available free and open course Elements of AI can help improve foundational AI literacy for public servants and citizens alike.34
Informal learning, including through communities of practice, mentoring or job rotations, among others, can help raise awareness and drive adoption. Such approaches are discussed further below.
Box 4.6. European Union competency framework for AI in government
The EU has developed a comprehensive competency framework to guide public servants in effectively adopting and managing AI. This framework, elaborated by the Joint Research Centre (JRC), is based on empirical research, including literature reviews, expert workshops and case studies, and identifies key competencies necessary for AI integration in public administration.
The framework classifies competencies into three main dimensions:
technical competencies: encompassing knowledge and skills related to data management, machine learning and AI system implementation
managerial competencies: addressing project ownership, knowledge brokering and decision-making in AI-related initiatives
policy, legal and ethical competencies: covering AI procurement literacy, auditing and collaboration with domain experts to ensure compliance and ethical considerations.
Additionally, competencies are grouped into three cross-cutting clusters related to (i) attitudinal competencies (know-why), referring to mindsets and dispositions that support AI adoption, such as technology inquisitiveness and a data-oriented culture; (ii) operational competencies (know-how), covering practical skills for AI implementation, including database management, algorithm training and decision-making processes; and (iii) literacy competencies (know-what), related to fact-based knowledge of AI concepts, regulatory frameworks and fundamental machine learning principles.
Beyond developing the framework, the JRC’s work gives three key recommendations on competencies:
1. develop focused, interdisciplinary AI competence training programmes
2. promote applied interdisciplinary research on AI competences
3. establish dedicated hiring processes and devote additional resources to attracting specialists with AI competences.
Developing AI-related skills internally should be complemented by external strategies to attract top talent and retain skilled public servants. The European Union’s competency framework for AI in government highlights the need for a structured approach to AI skills development, emphasising technical, managerial and policy-related competencies. Canada’s Digital Talent Strategy aligns with these principles, recognising that AI adoption in the public sector requires a balance of attitudinal (know-why), operational (know-how) and literacy (know-what) competencies to build an AI-ready workforce.35
To address critical skill gaps, targeted recruitment efforts and dedicated hiring processes are essential. The US Office of Personnel Management’s (OPM) Skills-Based Hiring Guidance and Competency Model for AI offers a structured approach to defining and assessing AI job classifications,36 while the EU framework underscores the importance of interdisciplinary training programmes and applied research. Governments should also focus on competitive compensation, clear career pathways and workplace flexibility to attract and retain AI talent.
Where internal capacity remains limited, partnerships with industry and academia, as well as strategic procurement of external expertise, can provide necessary support. The sections below on public procurement and AI partnerships explore these strategies further, offering policy examples. Additionally, continuous assessment of workforce alignment, progress in closing AI skill gaps and the effectiveness of learning initiatives is critical. By integrating competency-based insights from global best practices, governments can ensure their digital workforce evolves alongside technological advancements, ethical considerations and shifting labour market demands.
Facilitating connections and knowledge exchange
Communities of practice and networks allow for collaboration, learning, the sharing of expertise across organisational boundaries and the identification of collective or common problems. They can also serve as a useful conduit for soliciting user feedback on internal AI systems and services. The EC JRC competency framework (Box 4.6) notes that such activities can be critical for helping public servants gain know-how on AI and to help overcome challenges with early AI adoption. This framework outlines three action points for developing communities of practice: 1) create associations with relevant stakeholders, 2) deploy digital platforms for the communication and collaboration of involved entities, and 3) finance synergy grants for public-private collaboration and knowledge exchange, which can help reduce knowledge asymmetries and spark joint ventures.
Disseminating successful methods, strategies and use cases through such channels can help government organisations replicate and scale AI projects more effectively (OECD/UNESCO, 2024[29]). This approach helps avoid common mistakes, helps ensure consistency and accelerates the adoption of AI technologies across various government entities. For example:
Estonia’s skills development programme includes a data expert network with 500+ participants, AI meetups and experimentation events (e.g. hackathons, competitions) (OECD/UNESCO, 2024[29]).
Chile’s Network of Public Innovators connects over 30 000 public servants from all levels of government and other relevant actors for collective learning, creation and experimentation, including on AI.37
Canada’s Data Conference serves as the primary forum for public servants and data leaders to enhance awareness, share knowledge and advance data applications throughout the Canadian government.38 Additionally, department-led working groups on AI topics enable public servants across various departments to share experiences and insights, fostering collaboration and innovation in AI implementation (OECD/UNESCO, 2024[29]).
The International Smart Cities Network, led by Germany, promotes international exchange and knowledge transfer at national and local level by serving as a place for international dialogue and sharing of ideas and best practices.39
France established ALLiaNCE and the Communauté des labos, informal inter-ministerial groups for sharing AI best practices.40
In Switzerland, the Competence Network for AI has promoted communities of practice to understand common challenges in the implementation of AI systems, including in public administration.41
Such communities and networks need not be specifically focused on AI; in fact, more general groups can help surface a broader base of relevant issues and better consider alternative approaches. However, governments may want to develop additional AI-focused communities and networks or to ensure that general communities and networks include individuals with AI expertise in order to help identify links between problems and AI approaches that may constitute an optimal solution.
Bringing together multi-disciplinary skillsets and perspectives
Digital and AI skills are not the only ones relevant to designing and using AI in government. Some governments have sought to create one or more multidisciplinary teams to ensure AI initiatives benefit from diverse perspectives and expertise. The sensitivity and complexity around AI require the involvement of experts from a range of disciplines, including technology, ethics, law and public policy, to set a strategic approach to the use of AI. Such teams can provide a diversity of perspectives and expertise, thereby facilitating the identification of potential risks, and ensuring comprehensive and inclusive AI use across public administration (Berryhill et al., 2019[4]).
Investing purposefully
Governments are increasingly investing in AI by funding government AI initiatives. Estimates indicate that governments may increase their annual spending on AI-related technologies by 19% in 2025 and continue increasing thereafter (Gartner, 2024[92]). It is essential that governments strategically plan, implement and monitor AI investments in government to ensure value for money, identify and mitigate potential investment risks, implement and deploy technologies in a timely manner and evaluate whether intended benefits are realised (OECD, 2025[93]).
The 2023 OECD DGI (2024[74]) shows countries have not yet developed robust capabilities for managing digital investments in the public sector. While 88% of OECD countries have a standardised approach to developing value propositions, only 41% have developed a risk assessment mechanism for digital government investments, including operational (e.g. cybersecurity, service access disruption or related to use of legacy technologies) and financial (e.g. ROI uncertainty, sustainability of funding or overhead and maintenance) risks. To advance towards trustworthy AI investments in government, countries can develop assurance mechanisms, including:
strengthening strategic planning
supporting coherent investments across government
reinforcing investment monitoring mechanisms.
Whole-of-government coordination across these three domains will allow governments to invest in AI systems capable of delivering policy objectives on time and on budget. The following sub-sections review these domains.
Strengthening strategic planning for coherent investment
Governments should coordinate with key stakeholders to plan and manage AI investments based on clear principles. Establishing such principles can help to ensure that investment decisions are consistent with overarching strategic objectives. For instance, articulating a commitment to developing more proactive services could result in increased funding and management support for the implementation of AI chatbots for government-citizen interaction. Moreover, coordination among budgeting, digital government and public procurement authorities can help identify AI needs and align them with available resources and potential acquisitions from and partnerships with the private sector. Germany has sought to achieve this through a 2024 AI mission statement that is complemented by a new Centre for AI in Public Administration (BeKI) as an initiating and coordinating body for AI investments and guidance in the federal administration (OECD, 2024[94]).42
Governments can also use existing management tools, such as value proposition and investment risk assessment mechanisms, to reinforce assurance and secure coherence in investment decisions on AI systems. Adapting value proposition mechanisms to the specificities of AI systems allows governments to strengthen assurance processes for the trustworthy development and use of AI. This includes the assessment of key AI aspects such as compliance with regulation and policy standards. The value proposition can include a risk and impact assessment that measures and evaluates the benefits and potential risks of AI systems in government, as well as the plans to comply with regulations. For example, Australia has released a Pilot AI Assurance Framework to guide agencies in aligning AI use cases with Australia’s AI Ethics Principles, identifying impacts and risks, and applying mitigations.43
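The sketch below offers one hypothetical way of structuring a value proposition record for an AI investment, with a simple flag for proposals that warrant a fuller risk and impact assessment. The field names and threshold are illustrative assumptions, not elements of any country's assurance framework.

```python
# Illustrative sketch of an AI investment value proposition record with a
# simple proportionality check. Field names and thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AIInvestmentProposal:
    name: str
    expected_benefit: str
    estimated_cost_eur: float
    operational_risks: list = field(default_factory=list)  # e.g. cybersecurity, legacy dependence
    financial_risks: list = field(default_factory=list)    # e.g. ROI uncertainty, maintenance overhead
    complies_with_ai_guidance: bool = False

    def requires_enhanced_assurance(self) -> bool:
        """Flag proposals that warrant a fuller risk and impact assessment."""
        return (len(self.operational_risks) + len(self.financial_risks) >= 3
                or not self.complies_with_ai_guidance)

proposal = AIInvestmentProposal(
    name="Chatbot for benefits enquiries",
    expected_benefit="Shorter response times for routine enquiries",
    estimated_cost_eur=250_000,
    operational_risks=["cybersecurity", "service disruption"],
    financial_risks=["maintenance overhead"],
    complies_with_ai_guidance=True,
)
print(proposal.requires_enhanced_assurance())  # True: three risks are listed
```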
Funding AI and supporting coherent investments across government
Although often overlooked in national AI strategies, funding and financing mechanisms are an important consideration for government applications of AI (van Noordt, Medaglia and Tangi, 2023[95]). Even simple initiatives need some level of funding and financial support to make their way from idea to reality, with significantly more funding needed to scale up a successful project. The availability and nature of this financing can contribute greatly to the eventual success of AI-based innovation. Conversely, a lack of funding for AI development and implementation is a top barrier to government AI adoption (EC, 2024[96]; UK NAO, 2024[16]). Targeted financial resources can support AI experimentation and scaling, as well as help reduce fragmented efforts and uneven adoption of AI. The European Commission (EC) (2024[96]) noted this in a 2024 study on strategic AI adoption for public services, recommending that governments increase funding and resources for their use of AI. Examples of specific funding vehicles include:
The US Technology Modernization Fund (TMF) opened a special call for AI investments to support public agencies, as part of a broader funding mechanism designed to replace outdated legacy IT.44
In France, the Fund for the Transformation of Public Action offers financial support for public sector institutions seeking to improve policies and services with AI, in particular for project proposals with potential for scalability and replicability across the public administration.45
The United Kingdom is investing GBP 110 million to accelerate AI use in government, including adding capacity to its Incubator for AI (i.AI), and it has committed to providing GBP 10 million to boost regulators’ AI capabilities (Cover-Kus, 2024[97]; UK House of Commons, 2024[98]).
In Poland, government departments have been asked to set aside a percentage of their budget for AI procurements (van Noordt, Medaglia and Tangi, 2023[95]).
Monitoring mechanisms for coherent investment
Ensuring the on-budget and on-schedule development and deployment of AI investments contributes to the realisation of benefits and delivery of intended results. In line with general investments in digital technologies, governments can leverage monitoring tools to oversee the management and development of AI systems across the administration. These activities should consider developing key performance indicators (KPIs) and structured approaches to manage ongoing developments of AI systems through IT portfolio management. Such management tools can enable or complement guardrails focused on the trustworthy development, deployment and use of AI in government. These approaches can include quality control mechanisms, linking them with the planning and monitoring of digital investments to secure coherence across the AI system lifecycle. Countries have developed guidance to embed monitoring and measurement actions in the investment cycle of AI initiatives. For example, France uses a monitoring tool to track major state digital projects costing over EUR 9 million, including AI initiatives.46 It lists strategic IT projects and helps identify actions for success. The tool monitors project distribution by ministry, progress phase, functional area and estimated cost (OECD/UNESCO, 2024[29]). However, most countries still face the challenge of deploying continuous or ad hoc monitoring practices. Recognising the need to develop specific capabilities and plans for AI policy monitoring, Norway’s Office of the Auditor General (OAG) has been auditing AI use in central government as part of its pipeline of new performance audits since 2023. At the executive level, the country is taking steps to strengthen the monitoring and oversight of the portfolio of government AI projects through regular internal audits, performance monitoring and impact assessments (OECD, 2024[99]).
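As a simple illustration of portfolio-level monitoring, the sketch below computes two basic KPIs, the share of AI projects on budget and the share on schedule, across a hypothetical project portfolio; all project names and figures are invented.

```python
# Toy sketch of portfolio-level KPIs for monitoring AI projects:
# share of projects on budget and on schedule. All figures are invented.
projects = [
    {"name": "Fraud-detection pilot", "budget": 1.2, "spent": 1.1, "months_planned": 12, "months_elapsed": 12},
    {"name": "Citizen chatbot",       "budget": 0.8, "spent": 1.0, "months_planned": 9,  "months_elapsed": 11},
    {"name": "Document triage",       "budget": 2.0, "spent": 1.6, "months_planned": 18, "months_elapsed": 14},
]

on_budget = sum(p["spent"] <= p["budget"] for p in projects) / len(projects)
on_schedule = sum(p["months_elapsed"] <= p["months_planned"] for p in projects) / len(projects)

print(f"On budget: {on_budget:.0%}, on schedule: {on_schedule:.0%}")
```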
Using public procurement to obtain AI products and services and guide the market
Fit-for-purpose public procurement processes and mechanisms are key to enabling agile and cost-effective access to AI systems developed by third parties, ranging from large companies to start-ups and entrepreneurs. Beyond simply purchasing solutions or contracting resources, procurement serves as a key strategic approach, using purchasing as a bridge to connect public missions and objectives with societal needs and values. To fulfil this role effectively, procurement officials should conduct a comprehensive evaluation of AI’s consistency with internal objectives, adherence to fairness and transparency standards, resource efficiency, risk mitigation (e.g. biases or security vulnerabilities), engagement with stakeholders and impacted groups, and compliance with relevant legal and regulatory frameworks.
In promoting the effective deployment of AI technologies, public entities could consider adopting procurement mechanisms that foster agility, iteration and innovation. The process needs to start with careful preparation and planning to achieve flexible and efficient procurement processes that encourage broad participation and are open and accessible to all (UK DSIT, 2020[27]). This preparatory phase should include:
the establishment of a multidisciplinary team to support the procurement of the AI systems
an assessment of current data and governance approaches to evaluate readiness and existing capabilities and resources for the effective training and use of AI systems, as relevant
an assessment of potential risks throughout the AI lifecycle and the identification of associated mitigation strategies.
Agile and innovative procurement methods
Agile and innovative procurement methods provide opportunities to accelerate the adoption of new technologies within governments and to support the trustworthy development and use of AI (Monteiro, Hlacs and Boéchat, 2024[100]). These can include technology contests, demonstrations, challenge-based procurement processes and competitive dialogues (UK DSIT, 2020[27]). In addition, policy framework agreements that set overarching procurement rules, priorities and guidelines — either specifically for AI or with key vendors — can play a part in conducting business with the private sector, including for countries with a less diverse set of procurement practices available to them. In Australia, for example, the government used an existing agreement with a multinational technology company to deploy a widescale pilot of an AI solution across its public administration (Australia DTA, 2024[101]). Policy framework agreements can also take the form of predefined environments enabling AI procurement under broader guiding principles. For instance, the EC (2024[96]) Adopt AI programme aims to modernise public procurement for AI systems by fostering dialogue between public procurers and Europe’s AI industry. It promotes mutual understanding, drives industry investment and seeks to create a public procurement data space. Sectoral dialogues bridge the gap between procurers seeking solutions and suppliers needing insight into public administration plans (OECD/UNESCO, 2024[29]).
Procurement as a lever for public good and trustworthy AI
Public procurement can be a strategic tool to shape the market and ensure AI systems align with government standards. It also plays a critical role in setting requirements for AI systems that reflect public values, ensuring accountability, security and fairness in AI adoption. For example, the EC set up model contractual clauses to pilot procurements of AI in 2023, which were updated in 2025 to align with the requirements of the EU AI Act and provide comprehensive guidelines for high-risk applications and customisable options for non-high-risk AI (2023[102]; 2025[103]).47 Australia, too, has established model clauses.48 At the sub-national level, the City of Barcelona, Spain, has introduced procurement clauses emphasising data sovereignty, ensuring that data collected from the public, even by private companies, remains publicly owned (Berryhill et al., 2019[4]). Public procurement guidelines are policy instruments that can influence AI use globally. For instance, ChileCompra, Chile’s public procurement agency, has introduced a new tool to ensure procured AI systems are responsible and ethical (see Box 5.24).49 Internationally, initiatives such as the World Economic Forum’s (WEF) (2025[104]) "AI Procurement in a Box" provide structured guidance to help governments integrate AI procurement best practices and align AI acquisitions with ethical and regulatory frameworks. In April 2025, the United States issued a new AI acquisition policy that seems to promote agile procurement and the removal of unnecessary bureaucracy and outdated procurement processes (Box 4.7).
Box 4.7. Driving efficient AI acquisitions in the United States
In the United States, the White House Office of Management and Budget (OMB) issued M-25-22 Driving Efficient Acquisition of Artificial Intelligence in Government on 3 April 2025. It includes a variety of requirements and recommendations for federal agencies regarding AI acquisitions across six procurement phases. A non-exhaustive list of these include:
1. Identification of requirements: convene a cross-functional team to inform the procurement of AI systems and assist in creating an initial list of potential risks to be evaluated. As practicable, consider which uses may be “high-impact AI” (Box 1.3).
2. Market research and planning: seek state-of-the-art AI capabilities by conducting thorough market research, including through interagency knowledge sharing and considering novel capabilities from new entrants. Seek detailed demonstrations and tests of potentially useful AI to assess providers and identify obstacles to long-term cost effectiveness. Use performance-based techniques to identify requirements and contract terms to understand and assess vendor claims.
3. Solicitation development: include in solicitations requirements that protect against vendor lock-in and terms related to IP rights and lawful use of government data, and when practicable, agencies must be transparent regarding whether the AI use could be considered “high-impact” and what this could mean to the vendor.
4. Selection and award: test proposed solutions to understand their capabilities and limitations. Separately, agencies must evaluate proposals to identify any potential new AI-related risks that were not previously identified. Address in contract terms, where applicable, IP rights and the use of government data, privacy, vendor lock-in protections, compliance requirements for the policy discussed in Box 4.1, ongoing testing and monitoring, and vendor performance requirements.
5. Contract administration: help ensure AI systems are authorised by an appropriate official prior to deployment, put in place contract oversight and monitoring processes for contract performance and to identify and mitigate emerging risks. Arrange for periodic evaluation of the AI system or service's value to the government, considering, as practicable, effectiveness, efficiency, risks, operations and maintenance costs and stakeholder feedback. Consider sunset criteria.
6. Contract close-out: help ensure vendor lock-in protection, such as ensuring ongoing rights and access to any data or derived products.
To assist agencies, a centre-of-government agency will release publicly available guide(s) to assist the acquisitions workforce with the procurement of AI systems, and will create a digital repository available to public servants to facilitate the sharing of information, knowledge and resources about AI acquisitions (e.g. best practices, tools, language for contract clauses, negotiated costs).
Source: (US OMB, 2025[105]).
Procurement to shape the market
Beyond ensuring governments procure trustworthy AI for their own use, public procurement can act as a powerful lever to influence more general market dynamics, driving innovation and aligning AI system development with principles for trustworthy AI. Public procurement represents about 13% of GDP in OECD countries (OECD, 2024[106]). By approaching procurement strategically, governments can use the “economic weight of the government’s purchasing power” to foster the development of AI solutions that not only meet their needs but also promote broader alignment with ethical and regulatory standards (World Bank, 2025[107]).
Expanding AI’s potential through partnerships
Governments can benefit greatly from ongoing, active cross-sector partnerships in which each sector has a concrete role and contributions (OECD/CAF, 2022[30]). These partnerships can facilitate collaboration among public entities and AI specialists in other sectors, including private sector companies and non-governmental actors (e.g. academic institutions, foundations), promoting the development and implementation of cutting-edge solutions. Public-private partnerships (PPPs) are perhaps the most common type of arrangement. Some examples here include:
The European Union’s InvestAI initiative seeks to mobilise EUR 200 billion in investment in AI through a PPP akin to a CERN for AI to enable the development of leading-edge AI systems across sectors.50
As announced at France’s February 2025 AI Action Summit, 10 countries are developing a Public Interest AI Platform and Incubator to support and amplify existing public and private initiatives on public interest AI, decrease fragmentation between them and address digital divides. It will support digital public goods, technical assistance and capacity building to foster a trustworthy AI ecosystem for public interest AI.51
Chile’s Data Observatory (DO) is a non-profit organisation jointly led by government, industry and academia. It serves as a technological centre that, through the management of large volumes of data, seeks to contribute to social well-being by promoting the country’s sustainable development, generating enabling factors for the optimal development of AI, and supporting the creation of public policies and strategic decision-making based on evidence.52
Portugal’s ChatGPT-driven virtual assistant for public services (see Box 5.46) was developed through a PPP with the government and several companies.
Latvia’s new Artificial Intelligence Centre is a private foundation co-founded by government, academia and industry, designed to promote the trustworthy and sustainable adoption of AI across sectors, including a specific focus on the integration of AI in public administration, and to ensure the incorporation of Latvian language and culture in AI systems.53
Turning to GovTech startups
Bridging the concepts of public procurement and partnerships, GovTech is the collaboration between government and start-ups, innovators, government “intrapreneurs” and academia on innovative digital government solutions. It complements existing government capability for agile, user-centric, responsive and cost-effective processes and services (OECD, 2024[108]). It aims to contribute to an agile government and enhance digital government maturity. Not only does this help improve effectiveness and efficiency, but it can also encourage the participation of start-ups and newer providers in the government market. GovTech innovation is characterised by co-creation and experimentation. These collaborative interactions aim to transcend traditional supplier-contractor relationships to build new forms of partnerships. Rather than focusing on the detailed terms of reference and technical specifications, GovTech’s focus is instead on the solution’s expected outcomes and on involving GovTech actors in building it. While many such collaborations leverage public procurement (as discussed above), they can also use grants and monetary prizes to incentivise the creation of innovative solutions (e.g. through demo days or incubator programmes). The OECD has developed a GovTech Policy Framework to outline the factors important for maximising GovTech engagements (Figure 4.4).
Figure 4.4. OECD GovTech Policy Framework
Governments can leverage GovTech collaborations to experiment with and develop AI systems to address governmental and societal challenges. The 2023 OECD Digital Government Index (DGI) shows that 42% of the 33 OECD countries surveyed are setting GovTech objectives to facilitate the testing and adoption of emerging technologies, including AI. For example, Spain (2024[109]) is using its GobTech Lab to develop AI pilots. A recent EC report (2024[110]) titled GovTech: influencing factors, common requirements and recommendations provides several other use cases and findings.
Establishing guardrails to guide strategic and responsible AI
Guardrails help to ensure the trustworthy deployment, development and use of AI in government. They can be binding and non-binding policy levers, transparency processes and accountability mechanisms, such as monitoring and oversight bodies. Guardrails are essential for managing the risks associated with AI and deploying AI according to legal boundaries and social values. This ultimately helps to build public trust in government. The sections below review key guardrails, together with available policy options that governments can consider implementing in their own contexts, drawing on examples of international good practices.
It is important to note, however, that these guardrails should be seen in conjunction with the enablers above. Guardrails and their interpretation can be a leading driver of risk aversion, which contributes to risks of inaction and missed opportunities (see Chapter 1). Governments should also consider the importance of eliminating or revising guardrails that no longer serve their purpose or whose negative consequences outweigh their utility.
Finally, this section does not suggest that governments should necessarily put in place all of the guardrails discussed here, nor that they should apply to all uses of AI. That, too, could contribute to risk aversion and hinder the trustworthy adoption of AI in government. Instead, governments should determine which guardrails fit their operations and contexts and apply them to AI uses in a risk-based way, commensurate and proportionate to the potential level of risk.
Using policy levers to guide trustworthy AI
Policy levers to promote trustworthy AI can include non-binding instruments such as guidance documents, ethical frameworks, technical standards and risk-management frameworks; and binding instruments, such as laws and regulations. These policy levers work together and alongside other guardrails to protect human rights. They do so through risk management approaches and by fostering the responsible development and deployment of AI. They help mitigate AI misuse, skewed outcomes, privacy infringements and unintended consequences, while also offering practical tools for implementing governance principles and ensuring consistent performance across AI use (UNESCO, 2024[111]). Generally, developing policy levers for AI in government should follow a set of good practices to ensure they effectively promote trustworthy AI. The policy levers should:
Align with ethical principles and societal values to ensure that technology serves the common good. This is important for maintaining public trust, safeguarding the free exercise of human rights and ensuring that AI systems operate fairly and responsibly.
Take into account both innovation and risk management, helping government organisations navigate the evolving AI landscape responsibly by harnessing AI's transformative potential while addressing risks and challenges (OECD, 2023[35]; 2021[112]).
Continuously assess and identify potential risks associated with AI systems. Risk assessment tools offer a balanced approach, where both responsible use and cutting-edge advancements can coexist, ensuring that AI benefits society while meeting ethical standards (UNESCO, 2023[113]).
Engage stakeholders through different institutional arrangements to align AI use with the needs and values of those it impacts (see Engagement section of this chapter).
Non-binding policy levers
One of the most common entry points for promoting trustworthy AI is the adoption and/or development of principles or ethical frameworks. These instruments establish a set of values and best practices to guide the transparent, fair and responsible use of AI. Around the world, over 200 such instruments have been developed (Corrêa et al., 2023[114]), and such principles are often embedded in a country’s national AI strategy. They also often address a wide range of ethical concerns surrounding AI, such as bias, transparency, accountability and the impact of AI on society.
Important progress has been made in global governance for AI, with several instruments developed by intergovernmental and supranational organisations to standardise and unify AI development on a global scale. These include the OECD AI Principles (see Table 2.2 in Chapter 2), the EC’s Ethics Guidelines for Trustworthy AI, and the G7 Hiroshima AI Process International Guiding Principles for Advanced AI Systems.54 The African Union (AU) is also working on its own charter on trustworthy AI (OECD.AI, 2025[115]). Other non-binding international efforts include the UN General Assembly’s 2024 resolution on the promotion of “safe, secure and trustworthy” AI systems, which was backed by more than 120 countries,55 as well as declarations from international AI summits, which have been held in the UK, Korea and France.56 At the national level, governments are also establishing their own ethical frameworks. For example:
Australia has the “Artificial Intelligence Ethics Principles”, which are designed to prompt organisations to consider the impact of using AI-enabled systems and help businesses and governments practise the highest ethical standards when designing, developing and implementing AI. Building upon this framework, Australia’s Voluntary AI Safety Standard (VAISS) gives practical guidance to all Australian organisations on how to safely and responsibly use and innovate with AI.57
In Colombia, the “Ethical Framework for Artificial Intelligence” offers a set of principles and a methodology that should be considered in the design, development and deployment of AI systems (OECD, 2024[3]).58
Egypt has developed an “Egyptian Charter for Responsible AI” shaped around key principles.59
Governments are increasingly developing guidance documents for using AI. These are comparable to the guidance discussed above that serves as an enabler, but with more of a focus on establishing the parameters for trustworthy use of AI. Usually geared towards public officials responsible for developing AI projects and managing extensive data collection and analysis, these documents are intended to equip them with the knowledge needed to ethically shape AI projects and to raise awareness about potential risks, including, but not limited to, breaches of personal data (OECD/UNESCO, 2024[29]). Such guidance is more concrete than principles, often addressing technical aspects that may affect AI deployment. For example:
In the UK, the government and The Alan Turing Institute jointly developed a guide to “Understanding AI Ethics and Safety” (2019[116]). In Canada, the “Guide on the Use of Generative AI” serves as a resource for federal institutions utilising generative AI technologies (OECD/UNESCO, 2024[29]).60
Germany has two primary sets of guidelines to ensure the ethical use of AI in public services: Guidelines for the Use of AI in Employment and Social Protection Services; and the AI Guidelines for Federal Administration (OECD/UNESCO, 2024[29]; Policy Lab Digital, Work & Society within the German Federal Ministry of Labour and Social Affairs, 2024[117]). The latter has already been published, while the former is under development.61
It is important to note that non-binding measures are limited in what they can achieve. For instance, most OECD countries’ AI-specific measures to promote trustworthy AI in the workplace are non-binding and rely on organisations’ capacity to self-regulate (i.e. soft law) (OECD, 2023[118]). Because of its non-binding nature, soft law may not be enough to prevent or remedy AI-related harm in the workplace. Governments should also consider binding measures to overcome this limitation in important areas.
Binding policy levers
To date, most binding measures for AI in government have been put in place at national and sub-national levels, as discussed below. However, some international mechanisms have recently come into action. Perhaps the most notable example is the EU AI Act (Box 1.2). More recently, the Council of Europe (CoE) Framework Convention on AI and Human Rights, Democracy and the Rule of Law (2024[119]) was adopted as the first legally binding international treaty on AI. Opened for signature in September 2024, it applies to both the public and private sectors. It aims to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation. As of September 2025, the treaty has 17 signatories, including the European Union and the United Kingdom, as well as non-European countries such as Canada, Japan and the United States.
National AI laws and regulations can govern activities throughout the AI lifecycle and address issues like data protection, privacy, misuse and other concerns. Such rules help ensure that AI development aligns with societal values and legal norms by providing developers with clear guidance for compliance, establishing boundaries through binding standards for transparency, accountability and fairness (OECD, 2025[120]). They can also define responsibility for AI outcomes and promote consistency and cooperation across jurisdictions by harmonising standards. Ultimately, adequate and well-fitting laws and regulations help promote innovation while instilling necessary protections, ensuring that AI serves the public interest. These binding levers can be general AI governance laws that affect all sectors, AI-specific laws focused on government use, and cross-cutting laws related to but not specifically targeting AI.
However, governments need to keep in mind the dynamic nature of AI in the present and its many potential trajectories in the future. As discussed in Chapter 3, public servants face challenges with confusing or outdated rules that hinder their ability to adopt AI. In developing national binding instruments, governments should seek alignment with the OECD (2021[121]) Recommendation for Agile Regulatory Governance to Harness Innovation. The United Kingdom has sought to achieve this through its “pro-innovation approach to AI regulation”.62
General AI governance laws are broad rules that govern AI systems across various economic sectors, policy domains and regions. They focus on establishing frameworks for AI risk management, ethical use and societal impact. While many countries have laws that may influence AI (e.g. data protection laws), few have formal laws specifically on AI. Existing or proposed national laws and regulations include:
Korea’s Basic Act on the Development of AI and the Establishment of Foundation for Trustworthiness, or “AI Basic Act” (2024[122]), which will come into effect in January 2026, establishes a comprehensive framework to promote AI innovation while ensuring ethical standards, safety and public trust for all organisations using AI in the Korean market.
Bahrain and Oman put forward draft AI legislation in 2024, with Oman holding a public consultation.63
Governments in Latin America (including Argentina, Brazil, Chile, Colombia, Costa Rica, Ecuador, Mexico, Panama, Peru and Uruguay) are discussing general AI legislation (UNESCO, 2024[111]).
Specific laws and regulations can also be developed to govern the use of AI systems in government. These often focus on transparency, accountability, AI governance in public organisations, ethical considerations and responsible AI use in functions and services. In the United States, at the sub-national level, states have taken steps in this direction. For instance, New York introduced the LOADinG Act in 2024 to limit the use of automated decision-making systems by state agencies and provide certain protections for public servants in relation to AI (Werner, 2024[123]). Delaware has strengthened oversight of AI and generative AI in the state (2024[124]). However, the adoption of dedicated AI laws or regulations for government use remains limited, with most frameworks emerging as part of broader AI governance efforts. Besides hard laws and regulations, formal policy guidance can also provide binding rules for government organisations. For example, the 2025 US policy on “Accelerating Federal Use of AI through Innovation, Governance and Public Trust” covers a range of issues and measures to ensure trustworthy AI in government while removing barriers to innovation (Box 4.1).64
Cross-cutting laws are broader laws that, while not exclusively for AI, have significant implications for its deployment and use in government. These might include regulations on data protection, privacy, cyber risks and human rights, which shape how AI systems can be used in public sector contexts. For instance, data protection and privacy laws safeguard personal information by setting rules on how AI systems collect, store and process data, ensuring individuals’ privacy rights are upheld. While not typically designed with AI in mind, such laws significantly shape how it can be used. For example, the EU GDPR outlines specific requirements for managing personal data across all sectors, covering aspects such as data collection, storage, processing and the rights of individuals.
Promoting transparency in how government uses AI
To be transparent about AI use, governments should, to the extent practicable and appropriate, make algorithms open, understandable and accessible to public scrutiny, as well as disclose the processes and decisions AI systems contribute to. This means governments should provide clear, context-appropriate information about how their AI systems work, the data they use, how they reach conclusions and outputs, and the mechanisms available to challenge outcomes. Such transparency allows stakeholders to access information, make informed decisions and, if necessary, seek redress for potential harms. Designing a coherent, sustainable and impactful approach to algorithmic transparency involves: disclosing key information about the AI systems and algorithms used, including complementary assets such as training data; engaging with a broad and diverse set of stakeholders to ensure their needs and concerns are considered (see the “Engagement” section of this chapter); promoting AI literacy to empower communities to have an informed voice in the issues that affect them; and strengthening existing national rules on transparency and accountability to effectively address the challenges and risks posed by AI.
Transparency not only promotes trust and enhances public value but also underpins government accountability. As covered by the OECD AI Principles, transparency and accountability are two different but complementary concepts. Transparency enables more informed oversight, fosters trust and increases the accountability of those who develop or control these systems. Transparency also enhances the fairness of AI-driven systems and helps ensure that their implementation can be effectively monitored and evaluated, particularly when decisions have a direct impact on people's lives in areas such as healthcare, finance and criminal justice.
Countries should employ a variety of transparency policies, tools, methods and approaches that are suited to their audience and provide clear information. These instruments can be broadly classified as either proactive or reactive (GPAI, 2024[125]).
Proactive transparency instruments
Overall, “the public sector’s understanding of its own AI usage is severely lacking, which hinders both democratic accountability and internal knowledge sharing” (Ada Lovelace Institute, 2025[23]). Instruments are therefore needed that allow governments to understand their own AI use and, by extension, enable them to proactively share information about the AI systems used in the public administration without being prompted by requests (GPAI, 2024[125]). Policy options for proactive transparency include public registries of AI systems, publishing algorithm source code and documentation, user-driven proactive publications and automated responses triggered by interactions.
Public registries of AI systems are increasingly common, serving as centralised repositories that consolidate information about AI systems currently used in government. The goal is to create a "one-stop shop" where citizens and stakeholders can easily access information about the AI systems in use, their purposes, the sectors they apply to and the jurisdictions they affect. Examples include:
Colombia’s dataset on automated decision systems in the Colombian public administration65
the United Kingdom’s Algorithmic Transparency Records66
the US government’s AI use case inventory, which federal agencies are required to update at least annually (US OMB, 2025[17])67
national government public algorithm inventories in Chile, France and the Netherlands68
sub-national algorithm registers in Amsterdam, the Netherlands, and Helsinki, Finland.69
Developing a central, public and searchable registry of AI systems is a best practice that enhances transparency. Yet doing so can be challenging, in part due to how rapidly governments are deploying AI systems in a variety of domains. In Chile, challenges in making AI use cases transparent prompted the Chilean Transparency Council, an independent body created by law, to issue recommendations on improving algorithmic transparency in government (2024[126]). In the Netherlands, only about 5% of AI systems had been published in the Dutch AI registry as of October 2024 (Netherlands Court of Audit, 2024[127]). In the United Kingdom, the Public Accounts Committee (PAC) (2025[128]) found that relatively few (33) Algorithmic Transparency Records had been published, jeopardising public trust in the adoption of AI in government. However, such registries could be populated automatically, depending on whether and how impact and risk assessments are carried out in a government (OECD, 2024[94]). For instance, Canada’s Directive on Automated Decision-Making requires the completion of an Algorithmic Impact Assessment (AIA) for automated decision systems. The AIA results must be published as open information on Canada’s Open Government Portal. If required for all AI systems, such a process could automatically populate a public registry.
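To illustrate how such automatic population could work, the sketch below maps a completed impact assessment to a registry record. It is a minimal illustration only: the field names and the example assessment are hypothetical, loosely inspired by the kinds of information the inventories cited above publish (system name, owning organisation, purpose, risk level and a link to published assessment results), and do not reproduce any official schema.

```python
# Illustrative sketch: auto-populating a public AI registry from completed
# impact assessments. Field names are hypothetical and loosely modelled on the
# kinds of information published in the inventories discussed above; this is
# not any official schema.

import json
from dataclasses import dataclass, asdict
from datetime import date


@dataclass
class RegistryEntry:
    system_name: str        # name of the AI system
    owning_agency: str      # public organisation responsible for the system
    purpose: str            # plain-language description of what the system does
    impact_level: str       # risk tier produced by the impact assessment
    assessment_url: str     # link to the published assessment results
    last_updated: str       # date the record was last refreshed


def entry_from_assessment(assessment: dict) -> RegistryEntry:
    """Map a completed (hypothetical) algorithmic impact assessment to a registry record."""
    return RegistryEntry(
        system_name=assessment["system_name"],
        owning_agency=assessment["agency"],
        purpose=assessment["purpose"],
        impact_level=assessment["impact_level"],
        assessment_url=assessment["published_results_url"],
        last_updated=date.today().isoformat(),
    )


# Each completed assessment feeds the public registry without a separate manual step.
registry = [entry_from_assessment({
    "system_name": "Benefit eligibility triage",
    "agency": "Department of Social Services (illustrative)",
    "purpose": "Prioritise benefit applications for manual review",
    "impact_level": "moderate",
    "published_results_url": "https://example.gov/assessments/benefit-triage",
})]
print(json.dumps([asdict(e) for e in registry], indent=2))
```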
Publishing algorithm source code and documentation also promotes transparency. Open sourcing the code for public algorithms is regarded as a best practice in algorithmic transparency and is especially valuable for technical and expert audiences (Ada Lovelace Institute, 2021[129]). This allows those with the necessary skills to examine, test and verify how these systems operate, promoting accountability and trust. Some efforts stop a step short of releasing full source code but require the publication of thorough documentation that can have a similar effect.
In France, the Digital Republic law mandates government agencies to “make publicly available, in an open and easily re-usable format, the rules defining the main algorithmic processing used in the accomplishment of their mission when such processing is the basis of individual decisions”.70
The United Kingdom’s Algorithmic Transparency Recording Standard (ATRS) requires public sector organisations to disclose details about their use of algorithmic methods in decision-making processes (OECD, 2023[130]).71
In Canada, the Directive on Automated Decision-Making outlines detailed explainability requirements for AI systems, differentiated by levels of risk determined through an Algorithmic Impact Assessment tool (the results of which must also be published).72
The US policy discussed in Box 4.1 requires federal government agencies, when practicable and subject to some exclusions, to release and maintain AI code as open-source software in a public repository.
Transparency efforts can also be iterative, as seen in user-driven proactive publications. This type of proactive publication involves public entities choosing to proactively disclose information after receiving numerous similar requests. By publishing this information proactively, future requests are avoided, saving time for both officials and requestors. While its use in ensuring algorithmic transparency is not well documented, this approach could be a relevant and cost-effective method for disclosing frequently requested information related to algorithms (GPAI, 2024[125]).
Some types of disclosures may only be made for some users in context-specific situations. Automated responses triggered by interactions occur when information about an automated decision-making system is automatically provided during specific governmental processes. For instance, when someone engages with a public body's website or online platform for a service or administrative procedure involving an automated decision-making system, relevant information about the system could be automatically disclosed, without the user needing to request it explicitly (GPAI, 2024[125]).
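A minimal sketch of what such an interaction-triggered disclosure could look like is shown below, assuming a simple catalogue that maps administrative procedures to the automated decision-making systems they rely on. The procedure names, notice wording and URLs are invented for illustration and are not drawn from any programme cited in this chapter.

```python
# Illustrative sketch: attaching an automatic disclosure to a service response
# whenever the underlying procedure relies on an automated decision-making
# (ADM) system. Procedure names, notice wording and URLs are assumptions.

ADM_CATALOGUE = {
    # procedure id -> short description of the automated system involved
    "parking-permit": "Eligibility is pre-checked by a rules-based system.",
    "benefit-claim": "Claims are risk-scored by a statistical model before manual review.",
}


def respond(procedure_id: str, payload: dict) -> dict:
    """Build a service response, adding a disclosure notice if the procedure uses ADM."""
    response = {"procedure": procedure_id, "result": payload}
    if procedure_id in ADM_CATALOGUE:
        response["adm_disclosure"] = {
            "notice": ADM_CATALOGUE[procedure_id],
            "more_information": "https://example.gov/algorithm-register/" + procedure_id,
            "how_to_challenge": "https://example.gov/appeals",
        }
    return response


print(respond("benefit-claim", {"status": "received"}))
```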
Reactive transparency instruments
Reactive transparency instruments allow governments to respond to specific requests for information from individuals, groups or authorities. Unlike proactive disclosure, this approach is initiated by external demand rather than by the government itself (GPAI, 2024[125]). Typically, this involves submitting a request under the country’s relevant Access to Information (ATI) law to obtain information about an algorithm or its use, leveraging an existing policy instrument widely available in most contexts or countries.73 However, these regimes are not specifically designed for algorithmic transparency and can be ineffective when applied in this context (Valderrama, Hermosilla and Garrido, 2023[131]). For example, requesting information about an algorithm’s source code or its application is unlikely to produce the desired results due to issues with record management practices and common exceptions in ATI laws, such as conflicts with intellectual property restrictions and trade secrets of private providers of public services (Fink, 2017[132]; Brauneis and Goodman, 2017[133]).
Advancing accountability through risk management throughout the AI system lifecycle
For some government AI systems, the context of their development or use may pose a higher risk. This can relate to their scale (seriousness and probability of adverse impact), scope (breadth of application, such as number of individuals affected) or optionality (degree of choice as to whether to be subject to the effects of an AI system) (OECD, 2022[58]). Risk management procedures can help identify which systems or contexts pose higher risks so that they can be mitigated (OECD, 2023[134]).
Risk management for AI systems that may carry high risks should be informed by guidance on which levels of risk are acceptable for different uses and contexts. Risk management is needed both before deployment, such as through ex ante impact and risk assessments, and after AI systems are deployed. One of the most well-known examples is the AI Risk Management Framework of the US National Institute of Standards and Technology (NIST) (2023[135]). This framework helps public and private organisations identify and manage risks posed by AI systems, including generative AI, and proposes risk management actions that align with their goals and priorities. While designed for the US, it has been translated into several other languages and used in other countries. The G7 Hiroshima AI Process International Guiding Principles for Advanced AI Systems and Code of Conduct also set baseline standards to manage risks (2023[136]; [137]). The US also requires risk assessments for AI use and puts in place risk management practices for uses deemed “high-impact” (see Box 1.3) (US OMB, 2025[17]). As another national example, Türkiye’s Digital Transformation Office conducts the “AI Risk Management Recommendation” and “Trustworthy AI Seal” studies to closely monitor the use of AI for public benefit (OECD, 2024[3]).
AI experts recommend that governments make establishing or adopting such processes a top priority for mitigating AI harm (OECD, 2024[138]). Yet the proliferation of frameworks can make it difficult for governments to determine which is the most appropriate to follow. As calls for the development of risk management frameworks continue to grow, interoperability would enhance efficiency and reduce enforcement and compliance costs. The OECD (2023[139]) is actively working to promote policy coherence and interoperability among these frameworks.74
Impact assessments
Impact assessments, including Algorithmic Impact Assessments (AIA), can help public organisations to anticipate and evaluate how an algorithm may function in a specific context. They are evaluations of an AI system that use prompts, workshops, documents and discussions with the system’s developers and other stakeholders to explore how the system will affect people or society in positive or negative ways (Valderrama, Hermosilla and Garrido, 2023[131]). These tend to occur in the early stages of a system’s development before it is in use (ex-ante) but may occur after a system has been deployed (ex-post).
The primary aim of ex-ante AIAs is to assess the potential impacts of an algorithmic system on economies and societies and to provide a mechanism for accountability (Valderrama, Hermosilla and Garrido, 2023[131]). AIAs also help to better understand, classify and mitigate potential risks or harms associated with the algorithm. An example employing such a technique is Canada’s “Directive on Automated Decision-Making,” which requires an AIA that considers various factors and, in turn, provides a risk score that prescribes certain actions. The approach has been adapted in other countries, like Uruguay, where it informed the Guide for Algorithmic Impact Study (OECD/CAF, 2022[30]). In 2024, the Council of Europe issued the Human Rights, Democracy and Rule of Law Impact Assessment (HUDERIA).75 Its methodology provides for the creation of a risk assessment and a mitigation plan to minimise or eliminate the identified risks, protecting the public from potential harm.
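The scoring logic behind such directives can be illustrated with a short sketch: yes/no questionnaire answers contribute to a raw score, which maps to an impact level and a corresponding set of required mitigations. The questions, weights, thresholds and mitigations below are invented for illustration and do not reproduce Canada’s actual AIA questionnaire or Uruguay’s guide.

```python
# Minimal sketch of an impact-scoring step: yes/no answers produce a raw score,
# which maps to an impact level and required mitigations. Questions, weights,
# thresholds and mitigations are invented and do not reproduce Canada's AIA.

QUESTION_WEIGHTS = {
    "affects_rights_or_benefits": 4,     # decision affects legal rights or entitlements
    "fully_automated": 3,                # no human review before the decision takes effect
    "uses_personal_data": 2,
    "affects_vulnerable_population": 3,
}

LEVELS = [  # (minimum score, impact level, required mitigations)
    (8, "high", ["peer review", "human-in-the-loop", "public notice", "recourse process"]),
    (4, "moderate", ["human-in-the-loop", "public notice"]),
    (0, "low", ["documentation"]),
]


def assess(answers: dict) -> tuple:
    """Return the impact level and required mitigations for a set of yes/no answers."""
    score = sum(weight for question, weight in QUESTION_WEIGHTS.items() if answers.get(question))
    for threshold, level, mitigations in LEVELS:
        if score >= threshold:
            return level, mitigations
    return "low", ["documentation"]


print(assess({"affects_rights_or_benefits": True, "fully_automated": True}))
# -> ('moderate', ['human-in-the-loop', 'public notice'])
```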
Ex-ante AIAs are the most common approach in use today. The OECD has generally been supportive of this approach because such assessments help to convert principles into actions.76 However, some argue that evaluating impacts is not the same as evaluating harms, and that in some instances, doing so can obscure true harms. That is, in part, because the metrics used for impact assessments often do not measure emotional or psychological harm (Gupta et al., 2021[140]). That work also suggests that AIAs often do not consider the voices of all who may be impacted by an AI system. Other critics of AIAs argue that these assessments are not designed to continuously monitor the effects of deployed systems and adapt accordingly based on feedback and real-world ramifications (Mehta, Rogers and Gilbert, 2023[141]). Recent findings from the Ada Lovelace Institute (2025[23]), on lessons learned from six years of studying AI in government, further underscore this point. One of its main findings is that “AI is ‘sociotechnical’, in that it influences and is influenced by the social contexts in which it is deployed, often with unintended ripple effects. The success and acceptance of AI tools depends on their interaction with existing social systems, values and trust. Focusing exclusively on technical criteria while failing to consider these factors can lead to scepticism, and ultimately hinder adoption and use”.
To ensure AIAs are valuable, governments will need to conduct thorough sociotechnical assessments that engage appropriate stakeholders and integrate diverse perspectives (Lam et al., 2023[142]). However, ex-ante evaluations alone are often insufficient; they should be complemented by ex-post impact assessments that build on the ex-ante AIAs. This implies developing mechanisms for continuous monitoring, adaptation and accountability, ensuring that AI systems evolve in response to real-world evidence. The US NIST has launched the Assessing Risks and Impacts of AI (ARIA) programme to advance sociotechnical testing and evaluation of AI.77
As related to ex-ante impact assessments, governments should consider in advance whether AI is the best solution to address a given problem, as discussed in the enablers section on “Determining whether AI is the best solution”.
Algorithmic audits
After deployment, it is important that governments continue monitoring system behaviours to determine which expected or unexpected risks may be materialising, and to ensure that government organisations carry on with implementation in a responsible manner. This usually takes the form of algorithmic audits, which involve independent, usually external, scrutiny of an AI system or the processes around it. These can be technical audits of the system’s inputs or outputs; compliance audits of whether an AI development team has completed required processes or met regulatory requirements; regulatory inspections to monitor the behaviour of an AI system over time; or sociotechnical audits that evaluate how a system affects the wider societal processes and contexts in which it operates. Because audits usually occur after a system is in use, they serve as accountability mechanisms to verify whether a system behaves as developers intend or claim. Examples of algorithmic auditing processes can be seen in Box 4.8.
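As a narrow illustration of the first category (a technical audit of a system’s outputs), an auditor might compare decision rates across groups in a sample of logged outputs. The sketch below is illustrative only: the data, group labels and the 0.8 disparity threshold are assumptions, and a real audit would combine many such checks with compliance and sociotechnical review.

```python
# Illustrative sketch of one narrow technical-audit check: comparing approval
# rates across groups in logged system outputs. The data and the 0.8 threshold
# are invented; real audits combine many checks with compliance and
# sociotechnical review.

from collections import defaultdict

decisions = [  # (group, approved) pairs drawn from the system's decision log
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {group: approvals[group] / totals[group] for group in totals}
disparity = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio = {disparity:.2f}",
      "flag for review" if disparity < 0.8 else "within tolerance")
```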
Governments need to design their audits carefully, however. Inadequacies in AI auditing could create false confidence and “obscure problems with algorithmic systems and create a permission structure around poorly designed or implemented AI” (Goodman and Trehu, 2022[143]). Some experts argue that insufficient audits may prove meaningless or could exacerbate the problems they are designed to address, as well as being used as “audit washing” to give the appearance of due diligence.
AI system capabilities risk assessments
AI system capabilities risk assessments are similar to impact assessments but look specifically at the likelihood of harmful outcomes occurring due to a system’s capabilities. These also tend to occur in the early stages of a system’s development, before it is in use, but may occur after a system has been deployed. Such approaches should take into account risks related to AI systems’ limitations and capabilities, as well as contexts of use. For example, the government of Queensland, Australia issued a “foundational AI risk assessment guideline” for public servants.78 Risk assessments have become increasingly common in the private sector for advanced AI systems, such as through “responsible scaling policies” (RSPs), which commit organisations to specific actions based on risk assessments of AI system capabilities (OECD, 2024[138]). When identifying potentially dangerous capabilities, RSPs often set thresholds that trigger actions to slow or cease development (METR, 2023[144]).79
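A minimal sketch of the threshold logic such policies describe is given below: when a measured capability crosses a predefined threshold, a pre-committed action is triggered. The evaluation names, scores, thresholds and actions are invented for illustration and do not correspond to any published RSP.

```python
# Minimal sketch of threshold-triggered actions in a capability risk assessment,
# in the spirit of responsible scaling policies. Evaluation names, scores,
# thresholds and actions are invented for illustration.

THRESHOLDS = {
    # evaluation name -> (score that triggers action, pre-committed action)
    "autonomous_task_completion": (0.6, "pause deployment pending further safeguards"),
    "harmful_content_bypass_rate": (0.1, "retrain guardrails and re-evaluate"),
}


def check_capabilities(eval_scores: dict) -> list:
    """Return the actions triggered by measured capability scores."""
    actions = []
    for name, score in eval_scores.items():
        if name in THRESHOLDS and score >= THRESHOLDS[name][0]:
            actions.append(f"{name}: {THRESHOLDS[name][1]}")
    return actions or ["continue with routine monitoring"]


print(check_capabilities({"autonomous_task_completion": 0.7,
                          "harmful_content_bypass_rate": 0.05}))
# -> ['autonomous_task_completion: pause deployment pending further safeguards']
```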
Testing and assessment bodies and national, multilateral or regional bodies, such as AI safety or security institutes, are increasingly playing a role in facilitating risk management, including through building testing and assessment ecosystems (OECD, 2024[138]). For instance, the UK DSIT provided guidance on responsible capability scaling.80
Relative to the abundance of examples of governments using impact assessments or algorithmic audits, there are few examples of governments developing or using other AI risk assessments for AI in government. This may be because most government AI systems are procured from private sector companies, which may conduct risk assessments before putting their products on the market. It may also be because government AI systems, as discussed in Chapter 2, often do not leverage the latest approaches and instead rely on rules-based systems or more established ML approaches. Still, governments should consider and use such risk assessment approaches as they seek to adopt more advanced and capable AI systems. For example, in addition to structured testing, governments could conduct adversarial testing, commonly known as red-teaming, especially for complex foundation models used or acquired by governments, to proactively identify vulnerabilities, misuse risks and harmful system activities or outputs in sensitive government applications. Capability assessments could also include language-specific evaluations, particularly for LLMs used in multilingual public services. These assessments could help ensure fair performance, address data imbalances in foundation models and verify that models are suitable for the languages they serve.
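As an illustration of what a basic language-specific evaluation could involve, the sketch below scores a model per language on the same set of test items so that performance gaps surface before deployment. The model call is a trivial stub with an artificial English bias, and the test items and 0.85 threshold are placeholders for whatever benchmark and acceptance criteria a government actually adopts.

```python
# Illustrative sketch of a per-language evaluation harness for a multilingual
# public-service LLM. The model call is a trivial stub with an artificial
# English bias; the test items and 0.85 threshold are placeholders for a real
# benchmark and acceptance criteria.

def model_answer(prompt: str, language: str) -> str:
    """Stand-in for a call to the model under evaluation."""
    canned = {"What number do I call for emergencies?": "112"}  # stub 'knows' only English
    return canned.get(prompt, "I don't know")


def evaluate(test_items: dict, threshold: float = 0.85) -> dict:
    """Score the model per language on (prompt, expected answer) pairs and flag gaps."""
    results = {}
    for language, items in test_items.items():
        correct = sum(model_answer(prompt, language).strip() == expected
                      for prompt, expected in items)
        results[language] = correct / len(items)
        if results[language] < threshold:
            print(f"{language}: accuracy {results[language]:.2f} is below threshold; "
                  "review before deployment")
    return results


test_items = {
    "en": [("What number do I call for emergencies?", "112")],
    "fr": [("Quel numéro dois-je appeler en cas d'urgence ?", "112")],
}
print(evaluate(test_items))
# The stub only handles the English phrasing, so the French score falls below the
# threshold and is flagged: exactly the kind of gap this evaluation is meant to surface.
```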
Empowering oversight and advisory bodies to guide responsible AI
Oversight entities
The role of oversight bodies is evolving to meet new challenges posed by the expansion of AI use across government. For instance, Supreme Audit Institutions (SAIs)81 are increasingly required to extend their audit activities to include the scrutiny of AI algorithms that underpin government operations and public decisions. This shift necessitates a comprehensive evaluation of AI algorithms not only for their accuracy, security and effectiveness but also for their transparency and fairness. Box 4.8 illustrates how SAIs have adapted their role to conducting algorithmic audits and developing the frameworks for doing so.
Audits serve a range of purposes, including evaluating the performance of algorithmic systems against established standards, ensuring regulatory compliance, detecting unlawful discrimination, enhancing transparency and explainability, assessing security and robustness, evaluating broader social and ethical impacts and holding organisations accountable for their systems. Additionally, audits may be used to identify systemic failures in the use of algorithmic systems, offering valuable insights that can inform their application in another context (Ada Lovelace Institute, 2021[129]).
Box 4.8. Public sector algorithmic auditing approaches and tools
France
In 2024, the French Cour des Comptes evaluated the integration of AI within the Ministry of Economy and Finance. Since 2015, the ministry has implemented 35 AI programmes aimed at detecting individual fraud risks, identifying business difficulties and providing faster responses to users. While technological aspects are well managed, the report found that ethical, human resources and environmental considerations remain underexplored. The Cour recommended robust ministerial oversight to ensure trustworthy public AI, better assessment and transparent allocation of productivity gains, and proactive anticipation of AI's impact on staff roles.
Netherlands
The independent Netherlands Court of Audit has audited the Dutch government’s use of algorithms. Through its evaluation, the Court developed an audit framework specifically designed to assess the use of algorithms within government. The framework covers a wide range of aspects, from governance and accountability to technical elements such as the AI systems and data, privacy, IT general controls and ethical considerations. The framework is being used across several Dutch institutions to guide their development of new algorithms. In 2022, it audited nine major public sector algorithms and found that six (67%) did not meet basic requirements, exposing the government to bias, data leaks and unauthorised access.
Sweden
The National Audit Office of Sweden conducted an audit of three automated decision-making systems used by the Swedish Government: the parental benefit at the Swedish Social Insurance Agency, annual income taxation of private individuals at the Swedish Tax Agency and driving licence learner’s permits at the Swedish Transport Agency. The audit aimed to assess whether these systems operated effectively and efficiently while safeguarding legal certainty in decision-making. It evaluated the performance of the systems against legislative standards for efficiency and legal certainty, identifying specific shortcomings.
United Kingdom
In 2024, the UK National Audit Office (NAO) published a report on the use of AI in government, examining how effectively UK government bodies are using AI for public services. It found that only 21% of the 87 bodies analysed had an AI strategy, as required by policy, while 61% had plans to develop one. Notable initiatives include the Department for Work and Pensions’ establishment of an AI steering board and a separate advice and assurance group, and the Ministry of Justice’s creation of an AI steering group to review individual AI use cases, coupled with the adoption of algorithm consultation panels that include end users and data ethicists.
United States
The US Government Accountability Office (GAO) issued Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities, which identifies key accountability practices — centred around the principles of governance, data, performance and monitoring — to help federal agencies and others use AI responsibly.
Cross-border collaboration
The Supreme Audit Institutions (SAIs) of Finland, Germany, the Netherlands, Norway and the United Kingdom have jointly issued Auditing machine learning algorithms: A white paper for public auditors, which is updated over time.
Source: https://www.ccomptes.fr/fr/publications/lintelligence-artificielle-dans-les-politiques-publiques-lexemple-du-ministere-de, https://english.rekenkamer.nl/publications/reports/2021/01/26/understanding-algorithms, https://www.riksrevisionen.se/download/18.2008b69c18bd0f6ed3f25040/1608291082190/RiR_2020_22_en-GB.pdf, (Ada Lovelace Institute, 2021[129]), https://www.gao.gov/products/gao-21-519sp, https://www.auditingalgorithms.net, (OECD, 2023[130]), https://www.nao.org.uk/reports/use-of-artificial-intelligence-in-government.
Oversight bodies serve as platforms that bring together diverse expertise and perspectives, which is important for the effectiveness of any accountability mechanism (Ada Lovelace Institute, 2021[129]). For example:
In 2023, Spain created the independent Spanish Agency for the Supervision of AI (AESIA) (Pehlivan and Valín, 2023[145]).
In 2024, the EU AI Act established a supervisory European Artificial Intelligence Board (AI Board).82
Ombudsman institutions are playing a key oversight role in public organisations’ use of AI. For instance, the European Ombudsman has investigated the use of AI by the EC,83 and the Dutch Ombudsman and Data Protection Authority have actively examined algorithmic decision-making and AI’s impact on citizens' rights in the Netherlands.84
Parliaments and their oversight committees are taking a more active role in different countries, including Australia and the United Kingdom,85 where they have conducted inquiries into AI and algorithmic transparency. Some parliamentary bodies establish ad hoc committees to investigate specific AI-related concerns, particularly when emerging risks or controversies arise.
Advisory bodies
Advisory bodies can also help to ensure governments are using AI in a trustworthy manner. They can provide expert guidance, insights and recommendations in response to specific requests from government on emerging issues in AI. Some can be more hands-on, with Ireland’s AI Advisory Council, for example, developing and delivering its own workplan of advice.86 Other examples include:
New Zealand’s Data Ethics Advisory Group offers non-binding guidance on the use of algorithmic systems by public agencies. Its recommendations address issues such as human rights compliance, scientific validity, privacy and ethics (Ada Lovelace Institute, 2021[129]).87
Greece established a High-Level Advisory Committee on AI in November 2023, which plays a pivotal role in shaping national AI policy. It focuses on promoting economic and societal growth while addressing the risks associated with unchecked AI use. It has developed “A Blueprint for Greece's AI Transformation” (2024[47]). The country has also established a National Committee of Technoethics and Bioethics to provide independent strategic guidance and recommendations on the ethical implications of AI, among other things.88
Spain (2024[146]) has created the Artificial Intelligence Advisory Council as a formal independent body to provide the government with analysis, advice and support on the topic of AI. It held its first meeting in June 2024.
The government of Western Australia formed an AI Advisory Board in 2025 to provide advice to Western Australian government agencies on risk mitigation and to support the safe, responsible and ethical use of AI in the Western Australian public sector.89
AI safety and security institutes and units
Governments around the world are also focusing on this topic through efforts including the establishment of AI safety and security institutes and units in several countries (OECD, 2024[138]). For instance, Canada, the EU, France, Japan, Korea, Singapore, the UK and the US have each launched such an institute or unit,90 with these and three additional countries deciding to form an international network of institutes (UK DSIT, 2024[147]). The mandate of such institutes or units is generally broader than AI’s use in government, but some do include a focus on this. For instance, the upcoming IndiaAI Safety Institute will serve as a think-and-do tank for governance innovation and offer policy, legal and technical guidance to public institutions deploying AI, among other objectives.91
Engagement to shape strategic and responsible AI
AI systems have the potential to radically reshape the interaction between citizens and their governments, and among citizens themselves. Key stakeholders, including members of the public, should have a say in how governments use and govern AI-based technologies. Involving citizens and stakeholders can lead to greater trust in and legitimacy of AI systems used by government, as well as to AI systems that better reflect the needs of all (OECD, 2024[148]). Such efforts can help promote transparency, accountability and fairness in AI systems, preventing biases and potential harms.
Engaging citizens and stakeholders (such as scientists and engineers, affected communities, investors, companies or institutions) can enrich the understanding of issues related to AI technology, help policymakers anticipate problems of public acceptance and promote good communication (OECD, 2024[148]). This engagement is crucial for aligning the use and governance of AI with the goals and needs of society. It helps stakeholders to understand, question and influence how algorithms and AI governance mechanisms are designed and operated.
Early engagement allows for a comprehensive assessment of potential consequences and risks for various groups, fostering more inclusive and ethical development of AI systems and governance. This collaborative approach helps to identify and address concerns from diverse perspectives, ensuring that AI systems are designed and implemented in a way that is both responsible and beneficial to all stakeholders involved (OECD, 2021[112]). Citizens and stakeholders can be involved in different aspects of policymaking:
Agenda-setting. In the United Kingdom, the Centre for Data Ethics and Innovation has been engaging citizens through the Public Attitudes to Data and AI tracker survey, now at its fourth wave, which can inform the government’s approach to future policy development.92
Technology design. In 2020, the French government launched PIAF, a collaborative initiative with citizens, academia and civil society to build French-language datasets to train AI systems.93 Greece’s Pharos AI Factory is being shaped as a central hub for collaboration, knowledge sharing, resource pooling and joint project development among the public sector, academic institutions and private industry.
Technology assessment. In the United States, the Expert & Citizen Assessment of Science & Technology (ECAST) Participatory Technology Assessment is bringing public perspectives to bear on critical government science and technology decisions.94
Regulation. In 2024, the European Union opened a multi-stakeholder consultation on trustworthy general-purpose AI models under the AI Act,95 as well as a call for expressions of interest to participate in the drawing-up of the first general-purpose AI Code of Practice.96
Governments have several options to consider in enhancing public engagement when shaping the strategic and operational development of AI. The sections below explore deliberative processes, such as citizen assemblies, engagement with civil servants and the involvement of users in AI development. In addition to civic engagement for government AI use and governance, governments can also use AI for civic engagement, which is discussed in depth in Chapter 5 (section on “AI in civic participation and open government”).
Citizen assemblies
Citizen assemblies, also called citizens’ juries or panels, generally refer to a randomly selected group of people who are broadly representative of a community and who spend significant time learning and collaborating through facilitated deliberation to form collective recommendations for policymakers (OECD, 2020[149]).
A representative deliberative process is most suited to addressing issues such as values-based dilemmas; complex problems that require trade-offs and affect a range of groups in different ways; or long-term questions that go beyond electoral cycles (OECD, 2022[150]). AI’s development and governance are well suited to such a process. AI raises ethical and societal questions about its uses in specific contexts (e.g. facial recognition) and involves technical trade-offs between innovation and regulation. Most importantly, the adoption of AI technologies will certainly shape social interactions in the long term, with impacts that can span generations.
As an example of a citizen assembly on AI, in 2024, the Belgian Presidency of the Council of the European Union convened a representative group of 60 Belgians to collect citizens’ views on AI in the bloc (BeEU, 2024[151]). In another example, in 2023, a professor from Syracuse University partnered with the Center for New Democratic Processes (CNDP) to conduct the first national deliberative event on AI in the United States (Atwood and Bozentko, 2023[152]). International and sub-national examples of assemblies to shape AI governance also exist. In 2025 and 2026, the Global Coalition for Inclusive AI, a partnership between the Stanford Deliberative Democracy Lab and the Missions Publiques citizen participation consulting firm, will conduct deliberative assemblies that intend to reach more than 10 000 citizens in more than 100 countries. Follow-up events with decision-makers are planned to ensure the impact of the deliberative processes (Vergne and Siu, 2025[153]). Citizen participation is also a strong component of another emerging trend in AI governance: AI Localism, in which communities, including local governments, act to discuss and regulate the use of AI technologies according to their needs (Marcucci, Kalkar and Verhulst, 2022[154]). For example, in 2018, Mexico City’s innovation lab, Laboratorio para la Ciudad de Mexico (LabCDMX), conducted a deliberative exercise to build an Anticipatory Governance Framework for Mexico City on AI (Ramos, 2018[155]).
Engaging with civil servants and social partners
Engaging civil servants, who are at the frontline of public service delivery, is critical to the use of AI in government. Their roles and responsibilities are directly affected by the introduction of AI technologies, and their insights and experiences are invaluable in shaping responsible and effective AI use. As discussed in Chapter 1, the automation, creation or transformation of tasks due to the introduction of AI can bring opportunities for improved efficiency and effectiveness. But those changes can also raise concerns for these individuals about job security, working conditions and workers’ rights.
The design and implementation of AI initiatives should be carried out in a manner that respects labour rights, promotes civil servants’ well-being and uses their insights (OECD, 2023[118]). Transparent and inclusive dialogue with public servants and social partners, such as trade unions and employee associations, is important for achieving this. Workers should be informed about the objectives of AI initiatives, the potential impacts on their roles and the measures in place to mitigate any negative impacts. They should also be given opportunities to voice their concerns, provide feedback and contribute their insights to AI approaches.
Social dialogue and collective bargaining are essential for successful AI adoption in government (OECD, 2023[118]). They are key to building trust and effective collaboration among public servants and ensuring access to training to develop the skills and capabilities needed to work with AI. Social partners should also be involved, as they play a critical role in negotiating working conditions.
Involving users in AI development
Involving end-users in AI development in government helps ensure that AI solutions are user-centric and effectively tackle real-world issues. Figure 4.5 shows key steps to understanding users and their needs. Governments can use research methods — such as reviewing existing evidence, conducting interviews and observing users — to develop a deep understanding of these aspects, thereby enhancing the relevance and acceptance of AI applications (OECD/UNESCO, 2024[29]). In line with OECD Good Practice Principles for Service Design and Delivery in the Digital Age, users can help identify insights for iterating the design of services, simplifying underlying procedures and increasing access for all user groups (2022[156]). Moreover, making the design and delivery of AI-driven services a participatory and inclusive process empowers users, giving them an active role in co-creating and co-designing public services. This can include implementing mechanisms to involve users in testing, iterating and improving the service, as well as conducting rigorous and ethically-designed experiments with user groups to help ensure the use of AI has its intended effects and that any risks are identified on a small scale before scaling more broadly (see further discussion on “Creating spaces to experiment” above).
Figure 4.5. Key steps to understand the users and their needs in government AI developments
Collaborating across borders
Like other digital technologies, AI knows no borders. Its risks and impacts, as well as its potential positive uses, can be transnational.97 Cross-border engagement and collaboration can be instrumental in bridging knowledge and development gaps among countries, tackling common and complex challenges, managing risks and implementing innovative policies and services. International co-operation can help in building government AI capacity across the globe and in specific regions, as has been seen in Latin America and the Caribbean (LAC) (OECD/CAF, 2022[30]). This can include sharing open algorithms, infrastructure and intergovernmental datasets, as well as conducting joint efforts for the responsible development of emerging technologies. OECD (2021[157]) work has identified three mechanisms governments are using to connect and collaborate in order to tackle issues that cut across borders between administrative entities or areas, including in areas related to digital innovation and AI in government.
Cross-border governance bodies can address complex issues or those spanning the remit of multiple jurisdictions, such as seeking to integrate and harmonise AI approaches and ensure interoperability. Governance bodies allow governments to co-ordinate and harness the collective efforts of actors divided by borders. This can be seen in the European Union’s creation of the AI Office responsible for implementing, supervising and enforcing the AI Act. Beyond the EU sphere, the OECD Working Party of Senior Digital Government Officials (E-Leaders) and its thematic group on AI have been platforms for countries to work together to develop guidance and analytical products on AI in government, with the potential to propose relevant OECD non-binding legal instruments. Further, countries have been collaborating on international standards developed by international bodies, such as ISO/IEC 42001 on AI management systems, which is relevant for public sector agencies as well as companies and non-profits. However, there is currently no evidence of formal cross-border governance bodies set up specifically to address government development and use of AI.
Second, countries are also using innovative networks for cross-border collaboration. Networks are horizontal, often informal, ground-up structures that allow for the organic convergence of ideas and expertise across borders. For instance, the European Public Administration Network (EUPAN) promotes knowledge sharing on AI.98
And third, some countries are exploring emerging governance systems dynamics, which are entirely new ways of working together across borders. For instance, governments have worked together to develop collective digital infrastructure and data sharing approaches to promote seamless operations internationally. The European Union is perhaps the most advanced in this area, as its Interoperable Europe Act, which came into force in April 2024, establishes a framework to enhance interoperability within public sector organisations, ensuring seamless cross-border services. Key elements include creating an interoperability governance structure, promoting innovation and knowledge exchange, implementing regulatory sandboxes for testing solutions, and mandating interoperability assessments for public administration.99
A framework for trustworthy AI in government
When taken together, the policy measures discussed in this chapter form a Framework for Trustworthy AI in Government that can help governments align their actions in developing and adopting AI with the value-based principles and recommendations laid out in the OECD AI Principles (2024[158]). The framework outlines how governments can seize AI’s promise of productivity, responsiveness and accountability by putting in place the right mix of enablers, guardrails and engagement mechanisms.
Figure 4.6 presents a visual representation of the framework and Table 4.1 details the policy questions and measures that underpin its elements.
Figure 4.6. OECD Framework for Trustworthy Artificial Intelligence in Government
Table 4.1. Policy questions and measures underpinning the Framework for Trustworthy AI in Government
| Policy question | Policy measure | Description |
|---|---|---|
| What concrete policy actions and tools can governments develop to address existing challenges for a trustworthy use of AI in government? | Enablers | Policy actions and tools for areas where policymakers currently identify constraints and shortcomings, in order to establish a solid enabling environment and unlock the full-scale adoption of AI in government. These include governance, infrastructure, data, skills and talent, investments, procurement and partnerships. |
| | Guardrails | Policy tools that governments can consider developing for a responsible, trustworthy and human-centred use of AI in government. These may include non-binding instruments; laws and regulations; transparency and risk management instruments; or oversight (beyond the executive) and monitoring (within the executive) bodies. |
| Who should governments engage when developing and implementing the enablers and guardrails, as well as individual use cases, for the trustworthy use of AI in government? | Engagement | Different stakeholders to engage in building the foundations for a responsible use of AI in government. Various actors across government (e.g. ministries, civil servants, sub-national governments), in the broader ecosystem and beyond national jurisdictions would need to be engaged through targeted actions to effectively address policy opportunities and challenges related to the use of AI in government. |
| What impact does government strive to achieve when using trustworthy AI? | Impact | AI in government can help to increase productivity, responsiveness and accountability. |
Source: (OECD, 2024[3]).
Future OECD work on these issues
The enablers, guardrails and engagement processes that comprise the OECD Framework for Trustworthy AI in Government serve as a strong foundation upon which governments can take a strategic and responsible approach to AI. However, one report cannot fully convey the complexities of this vast spectrum of activities needed to adopt rapidly evolving AI technology while managing both critical risks and significant implementation challenges. Future OECD work will address elements of the framework more in-depth, with actionable insights about how governments can put in place these foundations. For instance, a report on AI experimentation in government is already underway and will be released in the coming months.
Critically, governments need to be able to identify where to prioritise AI investments and resources based on various trade-offs when considering the potential benefits and risks of particular AI applications. The OECD has recommended (2024[3]), and continues to encourage, that governments prioritise high-benefit, low-risk applications of AI, especially when building an initial level of maturity. However, most do not have the processes in place for holistic measurement of potential or realised results (efficiency of spend, quality of services, potential harms) that would allow them to make these determinations. This should be a priority for governments as a cross-cutting step that helps unlock the potential of AI, and it will be a focus of future OECD work.
References
[23] Ada Lovelace Institute (2025), Learn fast and build things: Lessons from six years of studying AI in the public sector, Ada Lovelace Institute, https://www.adalovelaceinstitute.org/policy-briefing/public-sector-ai/.
[129] Ada Lovelace Institute (2021), Algorithmic accountability for the public sector, Ada Lovelace Institute (Ada), AI Now Institute (AI Now), and Open Government Partnerships (OGP), https://www.adalovelaceinstitute.org/report/algorithmic-accountability-public-sector/.
[65] African Union (2024), Continental Artificial Intelligence Strategy, https://au.int/en/documents/20240809/continental-artificial-intelligence-strategy.
[46] Agency for Digital Government (2024), A Common Danish Language Resource, https://en.digst.dk/digital-governance/new-technologies/a-common-danish-language-resource/ (accessed on November 2024).
[53] AI Hub (2024), “AI Data Finder”, aihub.or.kr, https://www.aihub.or.kr/aihubdata/data/list.do?currMenu=115&topMenu=100 (accessed on November 2024).
[51] AI Sweden (2024), A shared digital assistant for the public sector, https://www.ai.se/en/project/shared-digital-assistant-public-sector (accessed on October 2024).
[152] Atwood, S. and K. Bozentko (2023), U.S. Public Assembly on High Risk Artificial Intelligence 2023 Event Report, https://www.cndp.us/wp-content/uploads/2023/12/2023-U.S.-PUBLIC-ASSEMBLY-ON-HIGH-RISK-AI-EVENT-REPORT-final.pdf.
[101] Australia DTA (2024), APS trials generative AI to explore safe and responsible use cases for government.
[36] Australian Government (2024), Evaluation of the whole-of-government trial of Microsoft 365 Copilot, https://www.digital.gov.au/initiatives/copilot-trial.
[151] BeEU (2024), A citizen’s view of artificial intelligence within the EU, https://belgian-presidency.consilium.europa.eu/media/lzxauu4i/rapport-ia-en-web-v2.pdf.
[4] Berryhill, J. et al. (2019), “Hello, World: Artificial intelligence and its use in the public sector”, OECD Working Papers on Public Governance, No. 36, OECD Publishing, Paris, https://doi.org/10.1787/726fd39d-en.
[133] Brauneis, R. and E. Goodman (2017), “Algorithmic Transparency for the Smart City”, SSRN Electronic Journal, https://doi.org/10.2139/ssrn.3012499.
[68] Brizuela, A. et al. (2025), Analysis of the generative AI landscape in the European public sector, European Commission, https://op.europa.eu/s/z4XY.
[126] Chilean Transparency Council (CPLT) (2024), CPLT lanza recomendaciones de transparencia algorítmica en servicios públicos, https://www.consejotransparencia.cl/cplt-lanza-recomendaciones-de-transparencia-algoritmica-en-servicios-publicos/.
[41] CNIL (2023), « Bac à sable » données personnelles : la CNIL lance un appel à projets sur l’intelligence artificielle dans les services publics, https://www.cnil.fr/fr/bac-sable-donnees-personnelles-la-cnil-lance-un-appel-projets-sur-lintelligence-artificielle-dans.
[42] CNIL (2023), « Bac à sable » intelligence artificielle et services publics : la CNIL accompagne 8 projets innovants, https://www.cnil.fr/fr/bac-sable-intelligence-artificielle-et-services-publics-la-cnil-accompagne-8-projets-innovants.
[31] CONPES (2025), CONPES 4144, Política Nacional de Inteligencia Artificial, https://colaboracion.dnp.gov.co/CDT/Conpes/Econ%C3%B3micos/4144.pdf.
[114] Corrêa, N. et al. (2023), “Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance”, Patterns, Vol. 4/10, p. 100857, https://doi.org/10.1016/j.patter.2023.100857.
[119] Council of Europe (2024), Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence.
[97] Cover-Kus, H. (2024), UK Government doubles down efforts to deploy AI across the public sector, https://www.techuk.org/resource/uk-government-doubles-down-efforts-to-deploy-ai-across-the-public-sector.html.
[124] Delaware General Assembly (2024), House Bill 333: An Act to Amend Title 29 of the Delaware Code Relating to the Artificial Intelligence Commission, https://legis.delaware.gov/BillDetail/140866.
[60] Digdir (2024), The Norwegian Resource Centre for Sharing and Use of Data, https://www.digdir.no/datadeling/norwegian-resource-centre-sharing-and-use-data/2766 (accessed on November 2024).
[73] Dombo, F. (2023), Public Sector Considerations for Successful Cloud Adoption, https://news.sap.com/africa/2023/05/public-sector-considerations-for-successful-cloud-adoption/.
[87] EC (2025), A pioneering AI project awarded for opening Large Language Models to European languages, https://digital-strategy.ec.europa.eu/en/news/pioneering-ai-project-awarded-opening-large-language-models-european-languages (accessed on 10 March 2025).
[55] EC (2025), Common European Data Spaces, European Commission, https://digital-strategy.ec.europa.eu/en/policies/data-spaces.
[67] EC (2025), The AI Continent Action Plan, European Commission, https://digital-strategy.ec.europa.eu/en/library/ai-continent-action-plan.
[103] EC (2025), Updated EU AI model contractual clauses, European Commission, https://public-buyers-community.ec.europa.eu/communities/procurement-ai/resources/updated-eu-ai-model-contractual-clauses.
[96] EC (2024), Adopt AI study, European Commission, https://op.europa.eu/s/z2tg.
[110] EC (2024), GovTech: influencing factors, common requirements and recommendations, European Commission, https://interoperable-europe.ec.europa.eu/collection/public-sector-tech-watch/document/govtech-influencing-factors-common-requirements-and-recommendations.
[8] EC (2024), What factors influence perceived artificial intelligence adoption by public managers, European Commission Joint Research Centre, https://publications.jrc.ec.europa.eu/repository/bitstream/JRC138684/JRC138684_01.pdf.
[102] EC (2023), EU model contractual AI clauses to pilot in procurements of AI, https://public-buyers-community.ec.europa.eu/communities/procurement-ai/resources/eu-model-contractual-ai-clauses-pilot-procurements-ai.
[132] Fink, K. (2017), “Opening the government’s black boxes: freedom of information and algorithmic accountability”, Information, Communication & Society, Vol. 21/10, pp. 1453-1471, https://doi.org/10.1080/1369118x.2017.1330418.
[64] France Élysée (2025), Make France and AI Powerhouse, https://www.elysee.fr/admin/upload/default/0001/17/d9c1462e7337d353f918aac7d654b896b77c5349.pdf.
[70] Frazier, K. (2025), The Dangers of AI Sovereignty, https://www.lawfaremedia.org/article/the-dangers-of-ai-sovereignty.
[137] G7 (2023), Hiroshima Process International Code of Conduct for Advanced AI Systems, https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems.
[136] G7 (2023), Hiroshima Process International Guiding Principles for Advanced AI system, https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system.
[92] Gartner (2024), Compare AI Software Spending in the Government Industry, 2023-2027, https://www.gartner.com/en/documents/5318363.
[86] Gob.cl (2025), Latam GPT: Learn about the Latin American AI model developed in Chile, https://www.gob.cl/en/news/latam-gpt-learn-about-the-latin-american-ai-model-developed-in-chile/ (accessed on 10 March 2025).
[143] Goodman, E. and J. Trehu (2022), AI Audit-Washing and Accountability, https://www.gmfus.org/news/ai-audit-washing-and-accountability.
[13] Government of Canada (2025), AI Strategy for the Federal Public Service, https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/gc-ai-strategy-overview.html.
[82] Government of Greece (2025), The implementation of the new National Supercomputer “DAIDALOS” and the Data Center at the Lavrio TPP begins - The country acquires one of the most powerful computing systems in Europe, https://mindigital.gr/archives/7331.
[47] Government of Greece (2024), A Blueprint for Greece’s AI Transformation, https://foresight.gov.gr/wp-content/uploads/2024/11/Blueprint_GREECES_AI_TRANSFORMATION.pdf.
[19] Government of Ireland (2025), Guidelines for the Responsible Use of AI in the Public Service, https://www.gov.ie/en/department-of-public-expenditure-ndp-delivery-and-reform/publications/guidelines-for-the-responsible-use-of-ai-in-the-public-service.
[122] Government of Korea (2024), A New Chapter in the Age of AI: Basic Act on AI Passed at the National Assembly‘s Plenary Session, https://www.msit.go.kr/eng/bbs/view.do?bbsSeqNo=42&mId=4&mPid=2&nttSeqNo=1071.
[18] Government of New Zealand (2025), Public Service AI Framework, https://www.digital.govt.nz/standards-and-guidance/technology-and-architecture/artificial-intelligence/public-service-artificial-intelligence-framework.
[146] Government of Spain (2024), Spain sets up the International Artificial Intelligence Advisory Council, https://www.lamoncloa.gob.es/lang/en/gobierno/news/Paginas/2024/20240621-ai-advisory-council-meeting.aspx.
[109] Government of Spain (2024), The Government approves the Artificial Intelligence Strategy 2024, https://portal.mineco.gob.es/en-us/comunicacion/Pages/20240514-Gobierno-aprueba-Estrategia-IA-2024.aspx.
[14] Government of Switzerland (2025), Strategy Use of AI systems in the Federal Administration, https://www.bk.admin.ch/bk/en/home/digitale-transformation-ikt-lenkung/ikt-vorgaben/strategien-teilstrategien/sb021-strategie-einsatz-von-ki-systemen-in-der-bundesverwaltung.html.
[12] Government of the Dominican Republic (2024), Estrategie Nacional de Inteligencia Artificial, https://innovacionrd.gob.do/enia/.
[15] Government of Uruguay (2021), AI Strategy for the Digital Government, https://www.gub.uy/agencia-gobierno-electronico-sociedad-informacion-conocimiento/comunicacion/publicaciones/ia-strategy-english-version/ia-strategy-english-version/ai-strategy-for.
[125] GPAI (2024), Algorithmic Transparency in the Public Sector: A state-of-the-art report of algorithmic transparency instruments, Global Partnership on Artificial Intelligence, https://gpai.ai/projects/responsible-ai/algorithmic-transparency-in-the-public-sector/algorithmic-transparency-in-the-public-sector.pdf.
[140] Gupta, A. et al. (2021), The State of AI Ethics Report (Volume 5), https://arxiv.org/abs/2108.03929.
[79] Hassani, A. et al. (2022), Escaping the Big Data Paradigm with Compact Transformers, https://arxiv.org/abs/2104.05704.
[77] IEA (2025), Energy and AI, https://iea.blob.core.windows.net/assets/34eac603-ecf1-464f-b813-2ecceb8f81c2/EnergyandAI.pdf.
[76] IEA (2023), Data Centres and Data Transmission Networks, https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks.
[80] Jones, N. (2025), Where AI Is Now: Smaller, Better, Cheaper Models, https://www.scientificamerican.com/article/ai-report-highlights-smaller-better-cheaper-models.
[69] Komaitis, K., E. Ponce de León and K. Thibaut (2024), The sovereignty trap, https://www.atlanticcouncil.org/blogs/geotech-cues/the-sovereignty-trap.
[142] Lam, M. et al. (2023), “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising”, Proceedings of the ACM on Human-Computer Interaction, Vol. 7/CSCW2, pp. 1-37, https://doi.org/10.1145/3610209.
[116] Leslie, D. (2019), Understanding artificial intelligence ethics and safety, The Alan Turing Institute, https://doi.org/10.5281/zenodo.3240529.
[63] Letzing, J. (2024), What is ‘sovereign AI’ and why is the concept so appealing (and fraught)?, https://www.weforum.org/stories/2024/11/what-is-sovereign-ai-and-why-is-the-concept-so-appealing-and-fraught/.
[90] Long, D. and B. Magerko (2020), “What is AI Literacy? Competencies and Design Considerations”, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-16, https://doi.org/10.1145/3313831.3376727.
[48] Bari, M. S. et al. (2024), “ALLaM: Large Language Models for Arabic and English”.
[154] Marcucci, S., U. Kalkar and S. Verhulst (2022), AI Localism in Practice: Examining How Cities Govern AI, https://files.thegovlab.org/ailocalism-in-practice.pdf.
[75] McKinsey (2024), AI power: Expanding data center capacity to meet growing demand, https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/ai-power-expanding-data-center-capacity-to-meet-growing-demand.
[141] Mehta, S., A. Rogers and T. Gilbert (2023), Dynamic Documentation for AI Systems, https://arxiv.org/abs/2303.10854.
[144] METR (2023), Responsible Scaling Policies (RSPs), https://metr.org/blog/2023-09-26-rsp/.
[78] Metz, C. et al. (2025), How A.I. is Changing the Way the World Builds Computers, https://www.nytimes.com/interactive/2025/03/16/technology/ai-data-centers.html.
[100] Monteiro, B., A. Hlacs and P. Boéchat (2024), “Public procurement for public sector innovation: Facilitating innovators’ access to innovation procurement”, OECD Working Papers on Public Governance, No. 80, OECD Publishing, Paris, https://doi.org/10.1787/9aad76b7-en.
[84] Montgomery, C., F. Rossi and J. New (2023), A Policymaker’s Guide to Foundation Models, https://newsroom.ibm.com/Whitepaper-A-Policymakers-Guide-to-Foundation-Models.
[26] Mulgan, G. (2019), Intelligence as an outcome not an input, http://www.nesta.org.uk/blog/intelligence-outcome-not-input.
[127] Netherlands Court of Audit (2024), Focus on AI in central government, https://english.rekenkamer.nl/publications/reports/2024/10/16/focus-on-ai-in-central-government.
[135] NIST (2023), AI Risk Management Framework, https://www.nist.gov/itl/ai-risk-management-framework.
[88] Noor, E. and B. Kanitroj (2025), Speaking in Code: Contextualizing Large Language Models in Southeast Asia, https://carnegieendowment.org/research/2025/01/speaking-in-code-contextualizing-large-language-models-in-southeast-asia.
[93] OECD (2025), “Effectively Managing Investments in Digital Government: An OECD Policy Framework”, OECD Public Governance Policy Papers, No. 76, OECD Publishing, Paris, https://doi.org/10.1787/5c324e91-en.
[50] OECD (2025), Enhancing Access to and Sharing of Data in the Age of Artificial Intelligence, OECD Publishing, Paris, https://www.oecd.org/en/publications/enhancing-access-to-and-sharing-of-data-in-the-age-of-artificial-intelligence_23a70dca-en.html.
[120] OECD (2025), OECD Regulatory Policy Outlook 2025, OECD Publishing, Paris, https://doi.org/10.1787/56b60e39-en.
[74] OECD (2024), “2023 OECD Digital Government Index: Results and key findings”, OECD Public Governance Policy Papers, No. 44, OECD Publishing, Paris, https://doi.org/10.1787/1a89ed5e-en.
[40] OECD (2024), “AI, data governance and privacy: Synergies and areas of international co-operation”, OECD Artificial Intelligence Papers, No. 22, OECD Publishing, Paris, https://doi.org/10.1787/2476b1a4-en.
[138] OECD (2024), “Assessing potential future artificial intelligence risks, benefits and policy imperatives”, OECD Artificial Intelligence Papers, No. 27, OECD Publishing, Paris, https://doi.org/10.1787/3f4e3dfb-en.
[62] OECD (2024), “Digital public infrastructure for digital governments”, OECD Public Governance Policy Papers, No. 68, OECD Publishing, Paris, https://doi.org/10.1787/ff525dc8-en.
[108] OECD (2024), Enabling Digital Innovation in Government: The OECD GovTech Policy Framework, OECD Digital Government Studies, OECD Publishing, Paris, https://doi.org/10.1787/a51eb9b2-en.
[2] OECD (2024), “Fixing frictions: ‘sludge audits’ around the world: How governments are using behavioural science to reduce psychological burdens in public services”, OECD Public Governance Policy Papers, No. 48, OECD Publishing, Paris, https://doi.org/10.1787/5e9bb35c-en.
[148] OECD (2024), Framework for Anticipatory Governance of Emerging Technologies, OECD Publishing, https://doi.org/10.1787/0248ead5-en.
[54] OECD (2024), G20 Compendium on Data Access and Sharing Across the Public Sector and with the Private Sector for Public Interest, OECD Publishing, Paris, https://www.oecd.org/en/publications/g20-compendium-on-data-access-and-sharing-across-the-public-sector-and-with-the-private-sector-for-public-interest_df1031a4-en.html.
[3] OECD (2024), “Governing with Artificial Intelligence: Are governments ready?”, OECD Artificial Intelligence Papers, No. 20, OECD Publishing, Paris, https://doi.org/10.1787/26324bc2-en.
[94] OECD (2024), OECD Artificial Intelligence Review of Germany, OECD Publishing, Paris, https://doi.org/10.1787/609808d6-en.
[83] OECD (2024), OECD Digital Economy Outlook 2024 (Volume 1): Embracing the Technology Frontier, OECD Publishing, Paris, https://doi.org/10.1787/a1689dc5-en.
[106] OECD (2024), Public procurement, https://www.oecd.org/en/topics/public-procurement.html.
[158] OECD (2024), Recommendation of the Council on Artificial Intelligence, OECD Publishing, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
[99] OECD (2024), The Digital Transformation of Norway’s Public Sector, OECD Digital Government Studies, OECD Publishing, Paris, https://doi.org/10.1787/1620e542-en.
[52] OECD (2023), “2023 OECD Open, Useful and Re-usable data (OURdata) Index: Results and key findings”, OECD Public Governance Policy Papers, No. 43, OECD Publishing, Paris, https://doi.org/10.1787/a37f51c3-en.
[134] OECD (2023), “Advancing accountability in AI: Governing and managing risks throughout the lifecycle for trustworthy AI”, OECD Digital Economy Papers, No. 349, OECD Publishing, Paris, https://doi.org/10.1787/2448f04b-en.
[44] OECD (2023), “AI language models: Technological, socio-economic and policy considerations”, OECD Digital Economy Papers, No. 352, OECD Publishing, Paris, https://doi.org/10.1787/13d38f92-en.
[139] OECD (2023), “Common guideposts to promote interoperability in AI risk management”, OECD Artificial Intelligence Papers, No. 5, OECD Publishing, Paris, https://doi.org/10.1787/ba602d18-en.
[130] OECD (2023), Global Trends in Government Innovation 2023, OECD Public Governance Reviews, OECD Publishing, Paris, https://doi.org/10.1787/0655b570-en.
[118] OECD (2023), OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market, OECD Publishing, Paris, https://doi.org/10.1787/08785bba-en.
[35] OECD (2023), “Regulatory sandboxes in artificial intelligence”, OECD Digital Economy Papers, No. 356, OECD Publishing, Paris, https://doi.org/10.1787/8f80a0e6-en.
[56] OECD (2022), Going Digital to Advance Data Governance for Growth and Well-being, OECD Publishing, Paris, https://doi.org/10.1787/e3d783b0-en.
[71] OECD (2022), “Measuring the environmental impacts of artificial intelligence compute and applications: The AI footprint”, OECD Digital Economy Papers, No. 341, OECD Publishing, Paris, https://doi.org/10.1787/7babf571-en.
[58] OECD (2022), “OECD Framework for the Classification of AI systems”, OECD Digital Economy Papers, No. 323, OECD Publishing, Paris, https://doi.org/10.1787/cb6d9eca-en.
[156] OECD (2022), “OECD Good Practice Principles for Public Service Design and Delivery in the Digital Age”, OECD Public Governance Policy Papers, No. 23, OECD Publishing, Paris, https://doi.org/10.1787/2ade500b-en.
[150] OECD (2022), OECD Guidelines for Citizen Participation Processes, OECD Publishing, https://doi.org/10.1787/f765caf6-en.
[157] OECD (2021), “Achieving cross-border government innovation: Governing cross-border challenges”, OECD Public Governance Policy Papers, No. 10, OECD Publishing, Paris, https://doi.org/10.1787/ddd07e3b-en.
[112] OECD (2021), G20 survey on Agile approaches to the regulatory governance of innovation: Report for the G20 Digital Economy Task Force, Trieste, Italy, August 2021, OECD Publishing, Paris, https://doi.org/10.1787/f161916d-en.
[43] OECD (2021), Good Practice Principles for Data Ethics in the Public Sector, OECD Publishing, Paris, https://www.oecd.org/gov/digital-government/good-practice-principles-for-data-ethics-in-the-public-sector.htm (accessed on 14 April 2025).
[21] OECD (2021), OECD Report on Public Communication: The Global Context and the Way Forward, OECD Publishing, https://doi.org/10.1787/22f8031c-en.
[9] OECD (2021), Public Sector Innovation Facets: Mission-oriented innovation, OECD Publishing, https://oecd-opsi.org/publications/facets-mission/.
[121] OECD (2021), Recommendation of the Council for Agile Regulatory Governance to Harness Innovation, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0464.
[7] OECD (2021), “The OECD Framework for digital talent and skills in the public sector”, OECD Working Papers on Public Governance, No. 45, OECD Publishing, Paris, https://doi.org/10.1787/4e7c3f58-en.
[149] OECD (2020), Innovative Citizen Participation and New Democratic Institutions: Catching the Deliberative Wave, OECD Publishing, Paris, https://doi.org/10.1787/339306da-en.
[34] OECD (2019), Artificial Intelligence in Society, OECD Publishing, Paris, https://doi.org/10.1787/eedfee77-en.
[59] OECD (2019), “Data governance in the public sector”, in The Path to Becoming a Data-Driven Public Sector, OECD Publishing, Paris, https://doi.org/10.1787/9cada708-en.
[159] OECD (2019), Recommendation of the Council on Public Service Leadership and Capability, OECD Publishing, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0445.
[57] OECD (2019), The Path to Becoming a Data-Driven Public Sector, OECD Digital Government Studies, OECD Publishing, Paris, https://doi.org/10.1787/059814a7-en.
[1] OECD (2017), Systems Approaches to Public Sector Challenges: Working with Change, OECD Publishing, Paris, https://doi.org/10.1787/9789264279865-en.
[37] OECD (forthcoming), Generative AI Experimentation in Government: A review of current practices, OECD Publishing.
[38] OECD (forthcoming), Mapping Relevant Data Collection Mechanisms for AI Training.
[115] OECD.AI (2025), The OECD-African Union AI Dialogue 2.0: From strategy to implementation, https://oecd.ai/en/wonk/the-oecd-african-union-ai-dialogue-2-0-from-strategy-to-implementation.
[30] OECD/CAF (2022), The Strategic and Responsible Use of Artificial Intelligence in the Public Sector of Latin America and the Caribbean, OECD Public Governance Reviews, OECD Publishing, Paris, https://doi.org/10.1787/1f334543-en.
[29] OECD/UNESCO (2024), G7 Toolkit for Artificial Intelligence in the Public Sector, OECD Publishing, Paris, https://doi.org/10.1787/421c1244-en.
[20] Pahlka, J. (2024), AI meets the cascade of rigidity, https://www.niskanencenter.org/ai-meets-the-cascade-of-rigidity/.
[49] Parankusham, K., R. Rizk and K. Santosh (2025), LakotaBERT: A Transformer-based Model for Low Resource Lakota Language, https://arxiv.org/abs/2503.18212.
[145] Pehlivan, C. and E. Valín (2023), Spain establishes the EU’s first AI supervisory agency, https://techinsights.linklaters.com/post/102intj/spain-establishes-the-eus-first-ai-supervisory-agency.
[45] Peixoto, T., O. Canuto and L. Jordan (2024), AI and the Future of Government: Unexpected Effects and Critical Challenges, https://www.policycenter.ma/publications/ai-and-future-government-unexpected-effects-and-critical-challenges.
[39] Personal Information Protection Commission of Korea (2023), Policy direction for safe use of personal information in the era of artificial intelligence [translated], https://www.pipc.go.kr/np/cop/bbs/selectBoardArticle.do?bbsId=BS074&mCode=C020010000&nttId=9083.
[117] Policy Lab Digital, Work & Society within the German Federal Ministry of Labour and Social Affairs (2024), Guidelines for the Use of AI in the Administrative Work of Employment and Social Protection Services, https://www.denkfabrik-bmas.de/fileadmin/Downloads/Publikationen/Guidelines_for_the_use_of_ai_in_the_administrative_work_of_employment_and_social_protection_services.pdf.
[155] Ramos, J. (2018), Laboratorio Para La Ciudad (CDMX), https://actionforesight.net/laboratorio-para-la-ciudad-cdmx/.
[66] Ray, T. (2025), Sovereign remedies: Between AI autonomy and control, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/sovereign-remedies-between-ai-autonomy-and-control/.
[72] Redapt (2023), On-Premises vs. Cloud for AI Workloads, https://www.redapt.com/blog/on-premises-vs-cloud-for-ai-workloads.
[160] Rudra, S. (2024), OSI Calls Out Meta for its Misleading ‘Open Source’ AI Models, https://news.itsfoss.com/osi-meta-ai/.
[24] Ryseff, J., B. De Bruhl and S. Newberry (2024), The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed, RAND, https://www.rand.org/pubs/research_reports/RRA2680-1.html.
[25] Ryseff, J. and A. Narayanan (2025), Why AI Projects Fail, https://www.rand.org/pubs/presentations/PTA2680-1.html.
[81] SDAIA (2025), Government Cloud (Deem), https://sdaia.gov.sa/en/Services/Pages/Deem.aspx.
[85] Seger, E. et al. (2024), Open-Sourcing Highly Capable Foundation Models: An Evaluation of Risks, Benefits, and Alternative Methods for Pursuing Open-Source Objectives, https://www.governance.ai/research-paper/open-sourcing-highly-capable-foundation-models.
[91] The Alan Turing Institute (2023), AI Skills for Business Competency Framework, https://www.turing.ac.uk/skills/collaborate/ai-skills-business-framework.
[61] U.S. General Services Administration (2024), AI Capability Maturity - Operational maturity areas, https://coe.gsa.gov/coe/ai-guide-for-government/operational-maturity-areas/index.html#dataops.
[10] UCL IIPP (2019), A Mission-Oriented UK Industrial Strategy, https://www.ucl.ac.uk/bartlett/public-purpose/sites/public-purpose/files/190515_iipp_report_moiis_final_artwork_digital_export.pdf.
[128] UK Committee of Public Accounts (2025), Use of AI in Government, https://committees.parliament.uk/publications/47199/documents/244683/default/.
[147] UK DSIT (2024), Global leaders agree to launch first international network of AI Safety Institutes to boost understanding of AI, https://www.gov.uk/government/news/global-leaders-agree-to-launch-first-international-network-of-ai-safety-institutes-to-boost-understanding-of-ai.
[27] UK DSIT (2020), Guidelines for AI procurement, https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement.
[6] UK Government (2025), A blueprint for modern digital government, https://www.gov.uk/government/publications/a-blueprint-for-modern-digital-government/a-blueprint-for-modern-digital-government-html.
[5] UK Government (2025), Prime Minister: I will reshape the state to deliver security for working people, https://www.gov.uk/government/news/prime-minister-i-will-reshape-the-state-to-deliver-security-for-working-people.
[22] UK Government Digital Service (2025), Artificial Intelligence Playbook for the UK Government, https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government/artificial-intelligence-playbook-for-the-uk-government-html.
[98] UK House of Commons (2024), Governance of Artificial Intelligence (AI): Government Response, https://committees.parliament.uk/publications/46145/documents/230927/default/.
[16] UK NAO (2024), Use of artificial intelligence in government, https://www.nao.org.uk/wp-content/uploads/2024/03/use-of-artificial-intelligence-in-government.pdf.
[111] UNESCO (2024), Consultation paper on AI regulation: emerging approaches across the world, https://unesdoc.unesco.org/ark:/48223/pf0000390979.
[113] UNESCO (2023), Ethical impact assessment. A tool of the Recommendation on the Ethics of Artificial Intelligence, UNESCO, https://doi.org/10.54678/ytsa7796.
[89] University of Rome Sapienza (2024), AI made in Italy: here is Minerva, the first family of large language models trained “from scratch” for Italian, https://www.uniroma1.it/en/notizia/ai-made-italy-here-minerva-first-family-large-language-models-trained-scratch-italian (accessed on 10 March 2025).
[28] US IT Modernization Centers of Excellence (n.d.), AI Guide for Government, https://coe.gsa.gov/ai-guide-for-government.
[17] US OMB (2025), Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf.
[105] US OMB (2025), M-25-22 Driving Efficient Acquisition of Artificial Intelligence in Government, White House Office of Management and Budget, https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-22-Driving-Efficient-Acquisition-of-Artificial-Intelligence-in-Government.pdf.
[131] Valderrama, M., M. Hermosilla and R. Garrido (2023), State of the Evidence: Algorithmic Transparency, https://www.opengovpartnership.org/wp-content/uploads/2023/05/State-of-the-Evidence-Algorithmic-Transparency.pdf (accessed in August 2024).
[95] van Noordt, C., R. Medaglia and L. Tangi (2023), “Policy initiatives for Artificial Intelligence-enabled government: An analysis of national strategies in Europe”, Public Policy and Administration, https://doi.org/10.1177/09520767231198411.
[153] Vergne, A. and A. Siu (2025), Global Coalition for an Inclusive AI, https://global-ai-dialogue.org/.
[33] Verhulst, S. and M. Sloane (2020), Realizing the Potential of AI Localism, https://www.project-syndicate.org/commentary/local-regulation-of-artificial-intelligence-uses-by-stefaan-g-verhulst-1-and-mona-sloane-2020-02?barrier=accesspaylog.
[11] Vinnova (2022), Designing missions, https://www.vinnova.se/contentassets/1c94a5c2f72c41cb9e651827f29edc14/designing-missions.pdf?cb=20220311094952.
[32] WAM (2024), Hamdan bin Mohammed appoints 22 Chief AI Officers across government entities in Dubai, https://www.wam.ae/en/article/b3kujwp-hamdan-bin-mohammed-appoints-chief-officers-across.
[123] Werner, J. (2024), New York Governor Signs AI Oversight Bill, https://babl.ai/new-york-governor-signs-ai-oversight-bill.
[107] World Bank (2025), Global Trends in AI Governance: Evolving Country Approaches, https://openknowledge.worldbank.org/entities/publication/a570d81a-0b48-4cac-a3d9-73dff48a8f1a.
[104] World Economic Forum (2025), AI Procurement Guideline, https://www.weforum.org/publications/ai-procurement-in-a-box/ai-government-procurement-guidelines/ (accessed on 10 March 2025).
Notes
← 2. See https://oecd-opsi.org/work-areas/anticipatory-innovation and https://www.oecd.org/en/about/programmes/strategic-foresight.
← 3. The context and use of “enablers” in this report are not the same as the “AI enablers” for AI systems generally discussed by AI policy and technical communities, which consist of data, algorithms and computational power (“compute”).
← 4. See https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023, https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024, and https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet, respectively.
← 5. See (OECD, 2021[7]) for additional relevant material, including skills and competencies for digital government leadership. See also the OECD (2019[159]) Recommendation on Public Service Leadership and Capability for information on how countries can instil values-driven culture and leadership, and ensure skilled and effective public servants, and responsive and adaptive public employment systems.
← 6. The OECD is engaged in supporting mission-oriented innovations through its Mission Action Lab, a joint initiative of the OECD Directorate for Science, Technology and Innovation (STI), the Observatory of Public Sector Innovation (OPSI) in the Public Governance Directorate (GOV), and the Development Co-operation Directorate (DCD). See https://oecd-missions.org.
← 7. Behavioural science is an interdisciplinary approach that encompasses the study of human behaviour and the design of strategies to change it. See https://www.oecd.org/en/topics/behavioural-science.
← 8. See https://www.apsc.gov.au/initiatives-and-programs/workforce-information/research-analysis-and-publications/state-service/state-service-report-2023-24/fit-future/supporting-safe-and-responsible-use-artificial-intelligence.
← 9. See https://www.digmin.dk/digitalisering/mere-om-digitalisering/digital-taskforce-for-kunstig-intelligens-.
← 10. See https://www.mitre.org/news-insights/fact-sheet/federal-ai-sandbox and https://www.mitre.org/news-insights/news-release/mitre-establish-new-ai-experimentation-and-prototyping-capability-us.
← 13. See https://govtech.justica.gov.pt/en/govtech-justica-english and https://www.ceei.es/legal&justiciatechlab/?r=vxpx6w6j70qr1jstvc6, respectively.
← 14. Details provided in Spain’s 2024 AI strategy, https://avance.digital.gob.es/es-es/notasprensa/paginas/20240514-gobierno-aprueba-estrategia-ia-2024.aspx.
← 15. https://www.gov.uk/government/publications/the-magenta-book/guidance-on-the-impact-evaluation-of-ai-interventions-html.
← 16. See https://www.gov.uk/guidance/repository-of-privacy-enhancing-technologies-pets-use-cases for a repository of use cases from different countries assembled by the UK.
← 17. Other countries include Brazil, Canada, China, Egypt, Estonia, Finland, Germany, France, Hungary, Iceland, India, Israel, Japan, Korea, Latvia, Norway, Qatar, Singapore, Slovenia, South Africa, Spain, Thailand, Türkiye, the UK, and Viet Nam.
← 18. See also https://www.ekt.gr/en/news/30774.
← 19. See https://www.athenarc.gr/el/news/meltemi-proto-anoihto-megalo-glossiko-montelo-gia-ta-ellinika and https://www.athenarc.gr/en/news/llama-krikri-new-greek-ai-language-model-featured-kathimerini, respectively.
← 20. See https://static.pib.gov.in/WriteReadData/specificdocs/documents/2022/aug/doc202282696201.pdf and https://www.indiatoday.in/technology/news/story/bhashini-ceo-amitabh-nag-talks-about-how-their-ai-tool-is-bridging-indias-language-divide-2646101-2024-12-06; information supplemented by Government of India officials.
← 21. See https://interoperable-europe.ec.europa.eu/collection/open-source-observatory-osor/news/spanish-authorities-release-alia-ai-models.
← 22. https://www.canada.ca/en/treasury-board-secretariat/corporate/reports/2023-2026-data-strategy.html.
← 23. See https://www.gov.uk/guidance/national-data-strategy and https://www.dta.gov.au/digital-government-strategy, respectively.
← 25. See https://sbci.gov.ie/information-access/data-sharing-and-governance-act, https://eur-lex.europa.eu/eli/reg/2016/679/oj, https://eur-lex.europa.eu/eli/reg/2023/2854, https://eur-lex.europa.eu/eli/reg/2022/868/oj, https://eur-lex.europa.eu/eli/dir/2019/1024/oj, and https://ec.europa.eu/isa2/eif_en, respectively.
← 26. Digital Public Infrastructure (DPI) is defined as shared digital systems that are secure and interoperable and that can support the inclusive delivery of and access to public and private services across society. Such systems act as common digital building blocks that underpin government processes and services and enable digital government transformation at societal scale (OECD, 2024[62]).
← 28. Model weights are “The variables or numerical values used to specify how the input (e.g. text describing an image) is transformed into the output (e.g. the image itself). These are iteratively updated during model training to improve the model’s performance on the tasks for which it is trained” (Seger et al., 2024[85]). The use of “open-source” models for this report does not imply that such models are released under an open-source license approved by the Open Source Initiative (OSI), a nonprofit steward of the Open Source Definition (https://opensource.org/osd). OSI has criticised some companies that call their models open source because they only provide the weights for the model, and not other elements, such as the training data, code and training practices (Rudra, 2024[160]). Some argue that such models should be called “open weight” instead of “open source”.
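For readers less familiar with the term, the following minimal Python sketch (illustrative only, not drawn from the cited sources) shows what it means for weights to be “iteratively updated during model training”: a single weight and bias are fitted to a toy linear task with plain gradient descent. The names used here (weight, bias, learning_rate) are hypothetical.

```python
# Illustrative sketch only: fitting two "model weights" (a weight and a bias)
# to a toy linear task with gradient descent, to show how weights are
# iteratively updated during training to reduce prediction error.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]   # inputs
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # targets following y = 2x + 1

weight, bias = 0.0, 0.0          # the "weights" start at arbitrary values
learning_rate = 0.01

for step in range(2000):
    # Forward pass: transform inputs into outputs using the current weights
    preds = [weight * x + bias for x in xs]
    errors = [p - y for p, y in zip(preds, ys)]
    # Gradients of the mean squared error with respect to each weight
    grad_w = 2 * sum(e * x for e, x in zip(errors, xs)) / len(xs)
    grad_b = 2 * sum(errors) / len(xs)
    # Iterative update: nudge the weights to reduce the error
    weight -= learning_rate * grad_w
    bias -= learning_rate * grad_b

print(f"learned weight={weight:.2f}, bias={bias:.2f}")  # approaches 2 and 1
```

The same principle, scaled up to billions of weights, underlies large AI models; releasing a model as “open weight” means publishing these learned numerical values.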
← 30. https://www.tech.gov.sg/products-and-services/for-government-agencies/productivity-and-marketing/vica.
← 32. https://indiaai.s3.ap-south-1.amazonaws.com/docs/empowering-public-sector-leadership.pdf. Information supplemented by Government of India officials.
← 33. See https://www.ipa.ie/ipa-overview/onelearning.2548.html and https://www.ypes.gr/ypourgeio-esoterikon-google-enarxi-epimorfotikis-drasis-gia-dimosious-ypallilous-me-thema-tin-techniti-noimosyni, respectively.
← 35. See https://www.canada.ca/en/government/system/digital-government/digital-talent-strategy.html.
← 36. See https://chcoc.gov/content/skills-based-hiring-guidance-and-competency-model-artificial-intelligence-work.
← 42. See https://www.cio.bund.de/Webs/CIO/DE/digitale-loesungen/datenpolitik/daten-und-ki/daten-und-ki-node.html and https://www.digitale-verwaltung.de/SharedDocs/downloads/Webs/DV/DE/Transformation/akteurssteckbrief-beki.pdf.
← 43. See https://www.digital.gov.au/policy/ai/pilot-ai-assurance-framework, https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles/australias-ai-ethics-principles and https://www.digital.gov.au/policy/ai, respectively.
← 44. See https://www.gsa.gov/about-us/newsroom/news-releases/technology-modernization-fund-seeking-proposals-fo-02082024.
← 45. https://www.numerique.gouv.fr/services/guichet-financement-exploitation-valorisation-des-donnees/
← 47. https://public-buyers-community.ec.europa.eu/communities/procurement-ai/resources/eu-model-contractual-ai-clauses-pilot-procurements-ai
← 49. See also https://www.chilecompra.cl/2024/11/goblab-uai-presento-nueva-herramienta-para-una-ia-responsable-y-etica.
← 51. See https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet and https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia/public-interest-ai.
← 53. See https://www.rtu.lv/en/university/for-mass-media/news/open/latvia-establishes-artificial-intelligence-centre and https://digital-skills-jobs.europa.eu/en/latest/news/latvia-establishes-artificial-intelligence-centre.
← 54. See https://oecd.ai/ai-principles, https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai, and https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system, respectively.
← 56. See https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023, https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024 and https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet, respectively.
← 58. See also https://oecd.ai/en/wonk/how-the-oecd-ai-policy-observatory-has-shaped-colombia-and-latin-americas-approach-to-ai-policy.
← 60. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/guide-use-generative-ai.html.
← 61. https://www.cio.bund.de/Webs/CIO/DE/digitale-loesungen/kuenstliche_intelligenz/kuenstliche_intelligenz-node.html.
← 62. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.
← 63. See https://hyscaler.com/insights/bahrain-pioneers-ai-regulation and https://www.ita.gov.om/itaportal/Data/SiteImgGallery/2024731125545486/National%20Artificial%20Intelligence%20Policy.pdf, respectively.
← 64. https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf.
← 65. https://research-data.urosario.edu.co/file.xhtml?persistentId=doi:10.34848/YN1CRT/8OHRT0&version=1.0.
← 67. See https://github.com/ombegov/2024-Federal-AI-Use-Case-Inventory for a centralised consolidation of AI use case inventories from across US federal government agencies.
← 68. See https://www.algoritmospublicos.cl/repositorio, https://odap.fr/inventaire, and https://algoritmes.overheid.nl, respectively.
← 69. See https://algoritmeregister.amsterdam.nl/en/ai-register and https://ai.hel.fi/en/ai-register, respectively.
← 70. https://stip.oecd.org/stip/interactive-dashboards/policy-initiatives/2023%2Fdata%2FpolicyInitiatives%2F2329.
← 72. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592.
← 73. For useful context, the Global Right to Information (RTI) Rating assesses each country’s formal legal framework for the right to information against a variety of categories (e.g. scope of applicability, requesting procedure, appeals process). See https://www.rti-rating.org/country-data.
← 74. See also the work of the OECD.AI Expert Group on AI Risk & Accountability (https://oecd.ai/site/risk-accountability).
← 75. See https://www.coe.int/en/web/portal/-/huderia-new-tool-to-assess-the-impact-of-ai-systems-on-human-rights.
← 76. See, for example, coverage of the topic in (OECD/CAF, 2022[30]).
← 77. See https://www.nist.gov/news-events/news/2024/05/nist-launches-aria-new-program-advance-sociotechnical-testing-and.
← 78. https://www.forgov.qld.gov.au/information-and-communication-technology/qgea-directions-and-guidance/qgea-policies-standards-and-guidelines/foundational-artificial-intelligence-risk-assessment-guideline.
← 79. The OECD is further exploring the concept of AI risk thresholds, as demonstrated by a September 2024 public consultation on the topic (https://oecd.ai/wonk/seeking-your-views-public-consultation-on-risk-thresholds-for-advanced-ai-systems-deadline-10-september).
← 80. See https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety#responsible-capability-scaling.
← 81. Supreme Audit Institutions (SAIs) are public bodies responsible for auditing government revenue and expenditure. By scrutinising public financial management and reporting, they provide assurance that resources are used as prescribed. See https://sirc.idi.no/about/what-are-sais for additional details.
← 85. See, for instance, https://www.aph.gov.au/Parliamentary_Business/Committees/Joint/Public_Accounts_and_Audit/PublicsectoruseofAI/Report and https://committees.parliament.uk/work/6986/governance-of-artificial-intelligence-ai, respectively.
← 89. https://www.wa.gov.au/organisation/department-of-the-premier-and-cabinet/office-of-digital-government/western-australian-artificial-intelligence-advisory-board.
← 90. See https://ised-isde.canada.ca/site/ised/en/canadian-artificial-intelligence-safety-institute, https://digital-strategy.ec.europa.eu/en/policies/ai-office, https://www.economie.gouv.fr/actualites/la-france-se-dote-dun-institut-national-pour-levaluation-et-la-securite-de-lintelligence, https://aisi.go.jp, https://www.aisi.re.kr, https://t.ly/vCtd1, https://www.gov.uk/government/publications/ai-safety-institute-overview, and https://www.nist.gov/aisi, respectively.
← 91. https://indiaai.gov.in/article/india-takes-the-lead-establishing-the-indiaai-safety-institute-for-responsible-ai-innovation. Information supplemented by Government of India officials.
← 92. https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-4/public-attitudes-to-data-and-ai-tracker-survey-wave-4-report.
← 93. https://www.etalab.gouv.fr/ia-decouvrez-et-participez-au-projet-piaf-pour-des-ia-francophones.
← 94. https://ecastnetwork.org and https://issues.org/thinking-like-citizen-participatory-technology-assessment-weller-govani-farooque.
← 95. https://digital-strategy.ec.europa.eu/en/consultations/ai-act-have-your-say-trustworthy-general-purpose-ai.
← 96. https://digital-strategy.ec.europa.eu/en/news/ai-act-participate-drawing-first-general-purpose-ai-code-practice.
← 97. The OECD has issued a series of reports on “Achieving Cross-Border Government Innovation”, which touch on challenges and success cases in a variety of areas, including AI. See https://cross-border.oecd-opsi.org.