This chapter synthesises 200 AI use cases across 11 government functions. It finds that using AI is a priority for governments, but adoption is fragmented and uneven. Use concentrates on public-facing services and internal operations, with fewer examples in policymaking. Governments pursue productivity, responsiveness and accountability, yet efforts to empower external actors are limited. Maturity varies by function and technology, with long-standing rules-based systems, selective machine learning and limited generative AI. Every use case can pose operational, ethical, resistance or exclusion risks if not trustworthy, underscoring the need for strong data foundations and coherent governance.
Governing with Artificial Intelligence
2. Trends and early lessons from the use of AI across functions of government
Key messages
The OECD analysed 200 use cases across 11 government functions. It found that while AI is a priority for most governments, efforts are not systematic.
This analysis helped to better understand the current state of AI in government and to identify the following overarching trends:
AI is unevenly distributed across government functions
AI is most used for public-facing public service activities and internal operations
Governments are using AI to pursue a variety of potential benefits
Some government functions are more mature regarding governing and adopting AI
Different functions of government have different contexts and needs.
The OECD found that every one of the 200 use cases analysed could pose one or more types of risk (operational, ethical, public resistance or exclusion) if not designed and used in a trustworthy way.
These risks vary across use cases and functions of government; it is therefore important to understand the main drivers of risk in each use case and field.
While governments are vigilant about several AI risks, some receive less focus.
OECD analysis of 200 use cases across 11 government functions
In its latest cross-cutting work on AI in government, the OECD (2024[1]) found the need for the systematic collection, documentation and analysis of AI use cases to monitor trends on policy options across countries. The OECD also found that more and better evidence of the impact of AI on governments will help ensure the technology is used for optimal impact. Ease of access to such evidence as well as information on policies, practice and use of AI in government could promote progress in trustworthy AI adoption, structured dialogue and exchanges among countries. Overall, there is a need for a holistic, systems approach to maximise the value of AI in government, including establishing enablers, guardrails and engagement mechanisms.
To help address these needs, this chapter considers and builds upon OECD and other relevant research to better understand the current state of AI in government and to illuminate overarching trends. This chapter analyses and synthesises 200 AI use cases spanning 11 government functions, as listed in Table 2.1 and discussed in-depth in Chapter 5.1 These use cases were identified through, and the findings discussed in this chapter informed by, desktop research, OECD meetings and discussions with public officials in relevant OECD working parties and networks, and ongoing data collections from the OECD Observatory of Public Sector Innovation (OPSI) and the OECD.AI Policy Observatory.2
Based on this methodology, the findings in this chapter are not generalisable to the broader universe of AI efforts in government. In addition, adoption of AI in government will vary across countries, depending on their national realities and levels of AI readiness. The findings do provide, however, observations rooted in real-world practice, the latest research and policymakers’ present points of view. In doing so, the chapter seeks to take the pulse of current activities and their characteristics, as well as potential gaps where there may be untapped potential or need for further research.
In the coming months, the OECD will establish a living global repository of relevant initiatives and case studies as part of the OECD.AI Policy Observatory.
Table 2.1. Functions of government analysed for Governing with AI
| Category | Function | Scope of analysis |
|---|---|---|
| Government policy functions | Tax administration | OECD experts in each function of government leveraged OECD and external research and analysed 200 use cases to determine: |
| | Public financial management | |
| | Regulatory design and delivery | |
| Key government processes | Civil service reform | |
| | Public procurement | |
| | Fighting corruption and promoting public integrity | |
| | Policy evaluation | |
| | Civic participation and open government | |
| Government services and justice functions | Public service design and delivery | |
| | Law enforcement and disaster risk management | |
| | Justice administration and access to justice | |
AI is a priority, but efforts are not systemic
In all, 48 countries and the European Union (EU) have adhered to the OECD AI Principles (Table 2.2), committing to promoting the trustworthy design, development, deployment and use of AI, including in the public sector. The OECD is tracking and reporting on the implementation of these principles over time (2023[2]; 2021[3]). Findings suggest that governments are less focused on using AI themselves than on promoting trustworthy AI adoption in the broader economy and society.
Table 2.2. OECD AI Principles
| Category | Principle | Description |
|---|---|---|
| Values-based principles | Inclusive growth, sustainable development and well-being (Principle 1.1) | Highlights the potential for trustworthy AI to contribute to overall growth and prosperity for all – individuals, society and planet – and advance global development objectives. |
| | Respect for the rule of law, human rights and democratic values, including fairness and privacy (Principle 1.2) | AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair and just society. |
| | Transparency and explainability (Principle 1.3) | Calls for transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes. |
| | Robustness, security and safety (Principle 1.4) | AI systems should function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed. |
| | Accountability (Principle 1.5) | Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the OECD's values-based principles for AI. |
| Recommendations for policymakers | Investing in AI research and development (Principle 2.1) | Governments should facilitate public and private investment in research and development to spur innovation in trustworthy AI. |
| | Fostering an inclusive AI-enabling ecosystem (Principle 2.2) | Governments should foster accessible AI ecosystems with digital infrastructure and technologies, and mechanisms to share data and knowledge, as well as ensure the quality of such information. |
| | Shaping an enabling interoperable governance and policy environment for AI (Principle 2.3) | Governments should create a policy environment that will open the way to deployment of trustworthy AI systems. |
| | Building human capacity and preparing for labour market transformation (Principle 2.4) | Governments should equip people with the skills for AI and support workers to ensure a fair transition. |
| | International co-operation for trustworthy AI (Principle 2.5) | Governments should co-operate across borders and sectors to share information, develop standards and work towards responsible stewardship of AI. |
Source: https://oecd.ai/en/ai-principles.
Governments are realising the potential of AI in public administrations and making it a strategic priority. Almost all OECD countries have put in place strategies and agendas for AI that establish a high-level vision and approach for its use in government. These are mainly embedded in broader national AI strategies. However, countries like Canada (2025[4]), Switzerland (2025[5]) and Uruguay (2021[6]) have developed dedicated strategies. Many governments have also sought to convert strategy into practice through cross-cutting or domain-specific policies and initiatives. Such efforts are discussed further in Chapter 4. In addition, many governments have adopted AI through hands-on use and, increasingly, custom development.
Progress has been made since the OECD began exploring AI in government in 2019; however, efforts to date are limited and not systematic. The potential opportunities of AI in government are significant but not easy to attain. Governments in two-thirds of OECD countries have started to explore the use of AI for internal efficiency by enhancing processes. Yet more progress is needed not only in using AI for other purposes, such as improving policies, but also in building the foundational components needed for AI in government to flourish (OECD, 2024[1]). In addition, the review of use cases suggests a proliferation of AI tools, with implementation often occurring in a piecemeal manner. These efforts are frequently undertaken without overarching governance mechanisms to steer initiatives across sectors or government as a whole, or to draw lessons from their implementation. As a result, the potential for coordinated learning, scaling and impact remains limited. Establishing robust governance frameworks could help ensure AI systems are deployed in a cohesive, efficient and accountable way, aligned with strategic priorities and public values.
The sections below seek to uncover the current status of AI use in government, including key patterns and trends among early adopters. In doing so, these sections seek to identify the extent to which governments extend beyond principles to take action in using AI, what results and outcomes they are achieving and what limitations they face.
General trends in government AI use cases
Uneven distribution of AI use across government functions
The 2023 OECD Digital Government Index (DGI) (OECD, 2024[7]) found that while some countries have deployed a wide range of initiatives to enhance their capacity to use AI in government, implementation is still a challenge across most countries. Digging deeper into the use cases analysed for this report, OECD analysis suggests government AI efforts may cluster around public service design and delivery, civic participation and open government, and justice administration and access to justice. Conversely, only a few initiatives in functions such as policy evaluation were identified (Figure 2.1).
Figure 2.1. Use cases are most present in public service, civic participation and justice functions
Source: OECD analysis of identified use cases.
There are several potential explanations for this distribution:
Public service design and delivery extends horizontally across many different types of organisations and issue areas, making it more prevalent in terms of total use cases than more vertical functions of government, such as tax administration.
Civic participation and open government’s prevalence may be partially because it is unencumbered by many of the risks (Chapter 1) and challenges (Chapter 3) faced by other functions. For instance, concerns around data access and security are largely non-applicable because the point of such engagement is generally to gather data on issues and questions that are public by nature. In addition, government teams engaging in civic participation are often among the more innovative groups in the public service, and thus, perhaps more prone to embracing new technological approaches.
The policy functions most represented tend to be public-facing, potentially suggesting a focus on areas with immediate visibility to citizens. Factors contributing to this could include greater demand from citizens, as well as a desire among government and political leaders to visibly demonstrate value.
Some functions face barriers or complexities, such as stringent rules on data access and sharing in tax administration and requirements for thorough audit trails in public integrity.
Some functions appear to be more mature than others pertaining to AI readiness, including their underlying foundations for AI, such as sufficient and quality data, as discussed below.
Some functions may have pre-existing structures and processes that cannot be easily substituted or complemented by AI systems.
The prevalence of AI use in justice and the related function of law enforcement is particularly interesting. In general, the OECD (2024[1]) has encouraged governments to aim for low-hanging fruit when initiating AI efforts, focusing on high-benefit, low-risk uses of AI. The use of AI in justice and law enforcement can be high benefit but also high risk. One reason for the prevalence of use cases in some of these areas may be the availability of more comprehensive and structured data. Another may be the sheer volume of tasks these functions handle. In the case of justice administration and access to justice, justice systems worldwide often operate under tight resource constraints, including limited budgets and court staff, even as the volume of cases continues to grow (Harvard Kennedy School, 2023[8]; Columbia University, 2020[9]). This mismatch has led to chronic case backlogs in many jurisdictions, creating intense pressure on court administrators to explore technologies that can boost productivity and mitigate the backlog problem. This might be reflected in the higher number of AI use cases in justice linked to internal operations, compared to other government functions (Figure 2.3).
An additional reason for this distribution may be the data available for review, with the OECD tending to identify, or governments tending to submit, information on initiatives in some functions more than others. This seems somewhat tempered by validation in OECD discussions and reviews by the OECD Working Party on Senior Digital Government Officials (E-Leaders), as well as comparisons with results from larger databases. Regarding the latter, the data collection for this report aligns with the trends seen in the EU and Latin America and the Caribbean (LAC), as recorded by the European Commission (EC) Public Sector Tech Watch observatory (2025[10]) and the “AI Systems in the Public Sector in LAC” database (Muñoz-Cadena et al., 2025[11]) (Figure 2.2). In the EU, the top three functions identified in their repository of nearly 1 500 AI uses, as of 31 March 2025, are general public services, economic affairs, and public order and safety.3 The public order and safety category includes use cases in both law enforcement and justice administration. The economic affairs category is mainly represented by use cases in sectors like transport, agriculture and energy. About 70% of economic affairs AI use cases are related to regulation and targeted public services and engagement. LAC exhibits a comparable trend in the top three functions within the reviewed repository of approximately 700 AI systems, as updated on 19 March 2025.
Figure 2.2. EU and LAC follow a similar trend with the AI use cases sample collected for this report
Percentage of use cases categorised according to the United Nations’ Classification of the Functions of Government
A final factor that may also influence the results is a variation on the “AI effect”, whereby “as soon as [AI] researchers achieve a milestone long thought to signify the achievement of true artificial intelligence, e.g. beating a human at chess, it suddenly gets downgraded to not true AI” (Bailey, 2016[12]). In discussions with the E-Leaders working party, delegates have suggested that narrow and traditional applications of AI may have become so integrated or commonplace, they no longer trigger external reporting or a response to data collection efforts. This could potentially occur more in areas with longstanding use of such systems, such as tax administration, resulting in less representation in examined initiatives. It is difficult to determine the extent to which this may occur; however, the analysis for this report did identify and include many such use cases.
AI is most used for public-facing public service activities and internal operations
Across the 11 core functions covered by this report, governments are using AI in four general activities: public-facing service delivery, internal operations, internal and external oversight activities and assisting policymaking. The use of AI is most common in public-facing service delivery. Activities related to internal operations are not far behind (Figure 2.3). Internal and external oversight activities and assisting policymaking were not as prevalent. This is not unexpected, as service delivery and internal operations constitute the majority of what government organisations do. Oversight activities are important, though they are more often limited to certain offices or teams. AI use in policymaking activities is also not as prevalent throughout governments. This finding aligns with previous measurements by the OECD DGI (2024[7]), which also indicated that governments could make more effort in this area. Many remain cautious or lack the necessary skills to incorporate AI into decision-making processes.
AI’s prevalence in government activities varies according to the nature of each core function. In the function of public service design and delivery, most AI applications naturally involve public-facing government-citizen interactions, with some use cases also addressing internal operations regarding how public services are designed or delivered. The significant number of service-related use cases in the justice sector, comprising almost two-thirds (16 of 25 cases, noting that one case may address more than one activity) of the documented instances in that function, indicates that this function has prioritised responsiveness to citizens, along with enhancing efficiency in its internal operations. This focus may be influenced by greater societal demands, pressure to reduce case backlogs and the need to manage scarce resources. Functions such as civic participation and regulatory design and delivery encompass most of the use cases related to policymaking activities. Related use cases include supporting the processing of evidence and stakeholder inputs for policy formulation, and various applications aiding decision making through analytics, simulations or forecasting.
Figure 2.3. AI use cases per core function and government activity
Note: The four activities in this figure are not mutually exclusive (e.g. one AI use case could seek to both improve internal operations and service delivery). Thus, the sum of activities is greater than the total number of use cases.
Source: OECD analysis of identified use cases.
These results are generally in line with findings from the EU (2025[10]) and LAC (2025[11]) databases. Although there is no uniform methodology for categorising government activities, these databases and the OECD’s do tend to show a common trend, particularly around services. Both the EU and LAC databases classify use cases under government activities (Figure 2.4), where “public services and engagement” represents a significant share of all use cases, being the most prominent in the EU and the second most prominent in LAC. However, in LAC, most use cases are categorised as “enforcement” activities, which include predictive enforcement, registration and data notarisation, smart recognition and supporting inspections, among others. These processes would generally coincide with the OECD’s internal operations and oversight (internal and external) categories. This means that the “internal management” category in the EU and LAC databases, which generally represents a smaller share, is not the only one containing use cases that the OECD would categorise as “internal operations”. Therefore, it is not possible to conclude whether the OECD dataset contains a higher share of use cases related to the internal operations of government compared to the EU and LAC. Finally, it is worth highlighting that processes under the “analysis, monitoring, and regulatory research” category in the EU and LAC generally match the OECD’s “policymaking” category and have a similar share (about 20-30% of all use cases).
Figure 2.4. Public services and engagement represent an important share of use cases under government processes in the EU and LAC
Percentage of use cases categorised by how the technology supports government decision-making and implementation
Governments are using AI in pursuit of a variety of potential benefits
The use cases analysed by the OECD have the potential to address all the AI benefits introduced in Chapter 1, and most use cases have the potential to yield multiple benefits. However, certain benefits receive stronger focus from governments than others (Figure 2.5):
About six out of every 10 examined use cases seek to contribute to the automation, streamlining and tailoring/personalisation of processes and services, particularly within justice, public services, civic participation and regulatory design and delivery.
Nearly half of all use cases seek to enhance decision-making, sense-making and forecasting, with most concentrated in public services, regulation and civic participation.
About a third of the use cases have the potential to improve accountability and anomaly detection, mainly within law enforcement and disaster risk management, civic participation, fighting corruption and promoting public integrity, public procurement and regulation.
A small proportion of use cases have the potential to unlock opportunities for external stakeholders, such as citizens, civil society organisations and businesses, especially in civic participation, access to justice and disaster risk management.
The lack of emphasis on using AI to unlock opportunities for external stakeholders stands out as a potential gap. AI experts have suggested that this type of empowerment is important and that governments could take more action to seize it (OECD, 2024[13]). However, such efforts could potentially be more prevalent in areas not covered by this report (e.g. specific sectors, such as agriculture or education). While the results of this benefit may be less directly felt by governments, they can pay dividends through strengthened trust in government or even economic gains.
Figure 2.5. Potential benefits of AI use cases across functions of government
Percentage of use cases for the corresponding function of government
Note: The potential benefits in this figure are not mutually exclusive (i.e. one use case may have the potential to yield more than one type of benefit). Thus, the sum of potential benefits observed is greater than the total number of use cases.
Source: OECD analysis of identified use cases.
Examining the specific benefits within the four general activity categories mentioned above yields more detailed insights into the direct gains governments aim to achieve through the use of AI (Figure 2.6). The sections below further detail these benefits and provide examples of some of the use cases that informed these trends.
Figure 2.6. Specific benefits of AI use cases
Note: The potential benefits in this figure are not mutually exclusive (i.e. one use case may have the potential to yield more than one type of benefit). Thus, the sum of potential benefits observed is greater than the total number of use cases.
Source: OECD analysis of identified use cases.
Automated, streamlined and tailored processes and services
About a third (31%) of the analysed use cases aim to improve productivity in analytical tasks. To a lesser extent, 15% of use cases represent government efforts to use AI to tailor services to address personalised citizen needs. This relatively lower adoption of AI for personalisation could be partly attributed to data governance limitations or restrictions due to the large volume of personal data required for such applications (see Chapter 3 for discussion on implementation challenges). This appears to be a gap warranting further analysis. Interestingly, the automation of mundane tasks comes in at 9% of the analysed use cases. This is contrary to conventional expectations of AI primarily being used for automating repetitive tasks that require little analytical consideration (Figure 2.6). While a case review methodology is not fully generalisable to the universe of AI in government, this suggests a potential shift of AI's use towards enhancing more complex decision-making processes and supporting more specialised work of public servants and policymakers. Yet, it could also suggest governments are not fully capitalising on AI with respect to repetitive tasks, which public servants spend a significant amount of time on, and for which tremendous efficiencies can be made through AI (The Alan Turing Institute, 2024[14]; Berryhill et al., 2019[15]).
Table 2.3 provides examples of how AI is being used for these purposes. Use cases intended to improve productivity in analytical tasks include estimating compliance costs in regulatory impact assessments (Germany), analysing and scoring candidates’ recorded responses in certain recruitment processes (United Kingdom), and supporting government staff with common procurement queries (North Carolina, United States). Automation of repetitive tasks that require less intellectual engagement encompasses various domains, including repetitive judicial tasks and financial and HR processes; examples include Prometea in Argentina, the AI Litigation Project in Brazil and Finland’s use of RPA and AI in financial management. Uses aimed at tailoring services and personalisation can be seen in functions such as public services, tax, regulation and justice. For example, the Public Employment Service in Sweden uses BÄR to tailor job-finding support, optimising resource allocation through personalised training and guidance recommendations. In Singapore, the tax authority developed a chatbot to enhance self-service by assisting taxpayers with inquiries and payments. Finally, AI is being used to strengthen civil service hiring and professional development programmes, such as the Australian Public Service Commission’s trial using AI to expedite the design, structuring and deployment of digital skills training, or the Spanish National Institute for Public Administration’s use of AI to transform how civil servants access and use learning resources by improving searchability and recommendations of relevant materials.
Table 2.3. Examples of AI for automated, streamlined and tailored processes and services
| Country | Initiative | Description | Sub-benefit | Function |
|---|---|---|---|---|
| Argentina | Prometea and ChatGPT in the justice sector | The Public Prosecution Service of Buenos Aires adopted Prometea in 2017 to automate repetitive judicial tasks and expedite case proceedings. In 2024, it also began exploring the use of ChatGPT to analyse legal cases and draft decisions. This AI tool reduces sentence-drafting time from an hour to 10 minutes, increasing efficiency in case management. | Automate mundane (and recently, analytical) tasks | Justice (Box 5.62) |
| Australia | AI to generate online learning | The Australian Public Service Commission (APSC) trialled accelerating course creation for public servants by using AI to design, structure and deploy digital skills training in minutes rather than weeks. The pilot trialled feeding in controlled materials to generate course outlines and quizzes and refine content through feedback loops. | Tailored approaches to strengthen the civil service | Civil service reform (Box 5.22) |
| Brazil | AI Litigation Project | Brazil's tax courts use AI to group similar tax appeal cases, assigning them to the same officers for faster processing. Initial trials demonstrated high accuracy, significantly reducing case backlog and improving decision speed. | Automate mundane tasks | Tax administration (Box 5.2) |
| Finland | RPA and AI in financial management | Finland leverages a tool that automates financial and HR processes through RPA and AI, optimising tasks such as invoice processing. Its structured automation strategy improves scalability and efficiency. | Automate mundane tasks | Public finance |
| Germany | AI for regulatory impact assessments | Germany’s Federal Statistical Office is exploring the use of AI to estimate compliance costs in regulatory impact assessments. AI identifies relevant legal text passages and predicts cost implications, allowing officials to focus resources on complex cases. | Improve productivity in analytical tasks | Regulation (Box 5.13) |
| Singapore | Chatbot for taxpayer services | Singapore’s tax authority developed a chatbot using AI and NLP to assist taxpayers with inquiries and payments. The system enhances self-service options, reducing administrative workload and improving user satisfaction. | Tailored services to address personalised needs | Tax administration (Box 5.4) |
| Spain | Knowledge graph | The National Institute for Public Administration (INAP) AI-enhanced knowledge graph transforms how civil servants access and use vast learning resources. By creating a “resource bank” that improves searchability and recommends relevant materials, INAP enables public officials to efficiently find and apply critical knowledge. | Tailored approaches to strengthen the civil service | Civil service reform |
| Sweden | BÄR | The Public Employment Service uses BÄR, an AI tool within the Prepare and Match programme, to tailor job-finding support. By analysing jobseekers' profiles and predicting employment chances, it guides decisions and optimises resource allocation through personalised training and guidance recommendations. | Tailored services to address personalised needs | Civil service reform (Box 5.43) |
| United Kingdom | Outmatch | The UK tax authority (HMRC) employs Outmatch to automate junior role recruitment by analysing and scoring candidates' recorded responses. This speeds up high-volume hiring while ensuring consistency in evaluation. | Improve productivity in analytical tasks | Civil service reform (Box 5.20) |
| United States | Chatbot to support procurement | North Carolina's IT department introduced a 24/7 AI-powered chatbot to support government staff with common procurement queries. It provides instant answers, streamlining processes and reducing wait times. | Improve productivity in analytical tasks | Public procurement |
Better decision-making, sense-making and forecasting
The use of AI for enhanced decision-making and sense-making of the present was identified in 18% of cases, while 15% of use cases aimed at better forecasting of the future, and 12% at improving information management and accessibility to support these activities (Figure 2.6). Such uses of AI not only support policymaking processes — which are indeed a minority when it comes to government AI efforts (OECD, 2024[7]) — but also contribute to smarter policy implementation and internal operations, and to better quality and pertinence of service design and delivery.
Table 2.4 provides some examples of how AI is being used for these purposes. Governments are using open-source tools like Polis to make better sense of the present, specifically in deliberative exercises, where it clusters public opinions and identifies areas of consensus. Other uses allow governments to better optimise decision-making. An example is Korea’s dBrain+, which analyses real-time financial management data and integrates risk assessment, budget management and performance evaluation tools. Most forecasting use cases aim to predict certain conditions in order to take decisions in advance and pre-position resources — such as predicting slippery conditions for winter road maintenance in Belgium or forecasting the likelihood of wildfires in Canada — to improve risk mitigation and authorities’ response times. Other forecasting uses aim to simulate alternative scenarios. One example is Helsinki’s (Finland) use of UrbanistAI to generate visualisations of alternative urban planning scenarios in order to support consensus-building among stakeholders. Finally, the use of AI to improve information management and accessibility can take the form of tools available to stakeholders both inside and outside of government for accessing vast amounts of data. Examples include the European Parliament’s search tool that allows users to analyse over 20 years of parliamentary documents, or the Netherlands’ Court of Audit’s pilot GenAI platform for analysing public audit reports. It might also take the form of support tools that quickly retrieve accurate government information and ensure reliable responses in customer support services, such as the virtual assistants Caddy in the UK and Albert in France. While these assistants span multiple domains, some are focused on specific areas, such as France’s Sofia conversational agent for ecological information.
Table 2.4. Examples of AI for better decision-making, sense-making and forecasting
|
Country |
Initiative |
Description |
Sub-benefit |
Function |
|---|---|---|---|---|
|
Belgium |
AI for road safety |
Belgium uses AI to predict slippery conditions and optimise resource allocation for winter road maintenance. By analysing weather and traffic data, the tool helps authorities proactively deploy de-icing measures, improving road safety and reducing accidents. |
Better forecasting of the future |
Public services; related to disaster management |
|
Canada |
Anticipating wildfire risks |
Alberta’s AI wildfire prediction system forecasts the likelihood of wildfires across the province’s protected forests using historical fire, weather and ecological data. The tool assists authorities in pre-positioning resources, improving response times and mitigating risks. |
Better forecasting of the future |
Law enforcement and disaster management (Box 5.54) |
|
European Union |
AI for examining parliamentary documents |
The European Parliament’s search tool enables citizens and policymakers to efficiently analyse over 20 years of parliamentary documents, including 38 000 motions for resolutions and parliamentary questions. By automating information retrieval, the AI system improves accessibility and facilitates informed decision-making. |
Improved information management/accessibility |
Civic participation and open government (Box 5.35) |
|
Finland |
UrbanistAI |
The City of Helsinki used UrbanistAI to generate visualisations of alternative urban planning scenarios, helping citizens and local businesses engage in discussions about pedestrianising key streets. With AI-generated renderings, the tool supported consensus-building among stakeholders. |
Better forecasting of the future |
Civic participation (Box 5.38) |
|
France |
Albert and Sofia |
Albert is a GenAI tool developed to assist public administration employees in responding to citizen inquiries. The tool helps civil servants search for regulations, summarise information and draft responses, while human agents verify the final output. Sofia is a conversational agent that facilitates access to the Ministry of Ecological Transition’s scientific and technical knowledge. |
Improved information management/accessibility |
Public services (Box 5.46) |
|
International |
Polis |
Polis is an AI-powered open-source platform designed to facilitate large-scale deliberative processes by clustering public opinions and identifying consensus statements. It has been used in multiple countries to inform climate policy, referendum debates, municipal decision-making and political party platforms. |
Enhanced decision-making and sense-making of the present |
Civic participation (Box 5.36) |
|
Korea |
dBrain+ |
dBrain+ is an AI-driven financial management system that analyses real-time economic, fiscal and financial data to optimise public finance decision-making. It integrates risk assessment, budget management and performance evaluation. |
Enhanced decision-making and sense-making of the present |
Public finance (Box 5.8) |
|
Netherlands |
GenAI platform on public audit work |
At the Netherlands’ Court of Audit, a public GenAI platform is currently being piloted to allow citizens and other stakeholders to explore public reports and find answers and sources to their questions on public audit work. |
Improved information management/accessibility |
Fighting corruption and promoting integrity |
|
United Kingdom |
Caddy |
Caddy, an AI-powered assistant developed in the UK, supports customer service agents by quickly retrieving accurate government information. With a human-in-the-loop validation system, Caddy ensures reliable responses while improving efficiency in handling citizen inquiries. |
Improved information management/accessibility |
Public services |
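To make the clustering idea behind tools like Polis more concrete, the sketch below shows a deliberately simplified consensus check: given participants' votes on statements and pre-computed opinion groups, it surfaces statements that attract agreement across every group. Real systems such as Polis derive the groups themselves (typically via dimensionality reduction and clustering of vote vectors) and use more robust statistics; the function name, data shapes and threshold here are illustrative assumptions only.

```python
# Minimal sketch of Polis-style consensus detection (illustrative only).
# Participants vote on statements: 1 = agree, -1 = disagree, 0 = pass.

def consensus_statements(votes, groups, threshold=0.6):
    """Return indices of statements whose agreement rate meets the
    threshold within every opinion group.

    votes: dict participant -> list of votes (one per statement)
    groups: dict group name -> list of participants
    """
    n_statements = len(next(iter(votes.values())))
    consensus = []
    for s in range(n_statements):
        bridges_all_groups = True
        for members in groups.values():
            cast = [votes[p][s] for p in members if votes[p][s] != 0]
            agree = sum(1 for v in cast if v == 1)
            if not cast or agree / len(cast) < threshold:
                bridges_all_groups = False
                break
        if bridges_all_groups:
            consensus.append(s)
    return consensus

votes = {
    "a": [1, 1, -1],   # group 1 agrees on statements 0 and 1
    "b": [1, 1, -1],
    "c": [1, -1, 1],   # group 2 agrees on statements 0 and 2
    "d": [1, -1, 1],
}
groups = {"g1": ["a", "b"], "g2": ["c", "d"]}
print(consensus_statements(votes, groups))  # statement 0 bridges both groups
```

In this toy example, only statement 0 is endorsed by both opinion groups, which is exactly the kind of "area of consensus" such tools surface for facilitators.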
Enhanced accountability and anomaly detection
Regarding uses for enhanced accountability and anomaly detection, 25% of the analysed use cases focused on detecting improper transactions and assessing integrity risks, and 5% on improving governments’ ability to engage non-governmental actors and promote accountability (Figure 2.6). The former covers use cases related to oversight, preventive controls, and risk assessment and management. Those are generally linked to the core mandates of some specific functions of government, such as those responsible for fighting corruption and promoting public integrity or enforcing regulatory compliance. The use cases that better connect government with non-governmental actors, ultimately contributing to greater accountability and responsiveness, are often related to the civic participation and transparency practices used in various government functions and organisations.
Table 2.5 provides some examples of how AI is being used to pursue this benefit. Some uses focus on prioritising cautionary actions based on the analysis of patterns and statistical anomalies. For example, Portugal’s Court of Audit uses AI to detect critical and priority cases in public procurement that might require concentrating audit efforts. In Chile, AI is used in the country’s public procurement platform to detect irregularities and improve compliance monitoring. Other uses are intended to detect loopholes or insufficient safeguards in policymaking, such as assisting corruption prevention officers in Lithuania in assessing corruption risk factors in legal texts. AI can also support governments’ connection with the public to reinforce accountability. This is the case for some online platforms and tools, such as the virtual assistant Chatico from Bogotá (Colombia), which has an open government module that eases participation in public campaigns and decision-making processes.
Table 2.5. Examples of AI for enhanced accountability and anomaly detection
|
Country |
Initiative |
Description |
Sub-benefit |
Function |
|---|---|---|---|---|
|
Chile |
ChileCompra |
ChileCompra’s Public Contracting Observatory uses LLMs to analyse procurement data for irregularities and improve compliance monitoring, enabling more efficient oversight and promoting ethical standards in public procurement. |
Detecting improper transactions and assessing integrity risks |
Public procurement (Box 5.24) |
|
Colombia |
Chatico |
The city of Bogotá launched Chatico, an AI-powered virtual assistant, to facilitate interactions between citizens and the local administration. Through its website and WhatsApp interfaces, the chatbot eases citizen participation in public campaigns and decision-making processes and also offers enhanced accessibility to public services. |
Enabling non-governmental actors to understand and engage with government and promote accountability |
Civic participation (Box 5.41) |
|
European Union |
DATACROS |
DATACROS uses AI to detect anomalies in corporate ownership structures that may indicate corruption or money laundering. The system analyses data from over 70 million companies across 44 European countries, flagging hidden patterns and potential illicit activities. |
Detecting improper transactions and assessing integrity risks |
Fighting corruption and promoting integrity (Box 5.27) |
|
Lithuania |
AI to assist corruption prevention officers |
Lithuania is developing an AI-powered tool that uses large language models (LLMs) to assist corruption prevention officers in evaluating corruption risk factors in legal texts, such as loopholes or insufficient safeguards. |
Detecting improper transactions and assessing integrity risks |
Fighting corruption and promoting integrity (Box 5.30) |
|
Portugal |
Assessing procurement risks |
Portugal’s Court of Audit is implementing AI-driven risk assessment methods to enhance its audits, ensuring the most critical cases in public procurement receive priority. This initiative optimises resources and strengthens accountability in public contracting. |
Detecting improper transactions and assessing integrity risks |
Public procurement |
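The prioritisation logic behind tools like those used by Portugal's Court of Audit or ChileCompra can be illustrated, in a highly simplified form, with a basic statistical outlier test: contracts whose amounts deviate strongly from the norm are queued for closer human audit. The real systems use far richer features and models (including LLMs); this sketch, with its invented data and threshold, only conveys the general principle of concentrating audit effort where anomalies appear.

```python
# Illustrative sketch of statistical anomaly flagging in procurement data.
import statistics

def flag_anomalies(amounts, z_threshold=2.0):
    """Return indices of contract amounts more than z_threshold standard
    deviations from the mean - candidates for closer human audit."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all amounts identical, nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_threshold]

# Nine routine contracts and one outlier worth flagging for review.
contracts = [10_200, 9_800, 10_500, 9_900, 10_100,
             10_300, 9_700, 10_000, 10_400, 95_000]
print(flag_anomalies(contracts))  # only the outlier at index 9 is flagged
```

Note that the flagged contract is not assumed to be improper; as in the examples above, the point is to direct scarce auditor attention, with humans making the final judgement.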
Unlocking opportunities for external stakeholders through AI as a good for all
Finally, a small minority of use cases (4%) have the potential to unlock opportunities for external stakeholders, such as citizens, civil society organisations and businesses. Such activities are different from general service delivery or from new forms and channels of interaction for participation and accountability purposes. Here, AI can empower external actors by enhancing their capabilities and access to information, enabling them to use government-supported AI systems to achieve their missions and objectives more effectively. Such uses remain marginal and represent a potential gap in government efforts warranting further research and action. As noted above, however, such use cases could potentially be more prevalent in areas not covered by this report.
Table 2.6 provides some examples of how AI is being used for these purposes. Use cases generally include tools in participatory platforms that can be used according to the needs and objectives of users. For example, the participatory platform Decide Madrid (Spain) experimented with AI tools to assist citizens in aggregating and developing their own proposals for action. Similarly, the AI tool MAPLE helps citizens by summarising draft legal texts and allowing them to submit inputs and comments on pending legislation. In the field of disaster risk management, governments can also contribute public goods that empower citizens to become more resilient to natural disasters. For example, the Bencana Bot in Indonesia prompts residents to report floods via social media, generating real-time online maps that, combined with official data, can be reused by non-governmental stakeholders. Although the examples here help address the relevant benefit, they do so in a somewhat tangential way, with external actors sometimes benefiting as a positive spillover from government activities. Opportunities may emerge for governments to open up or provision AI systems more directly for external stakeholders, for example in areas not currently served by or attractive to the private sector.
Table 2.6. Examples of AI unlocking opportunities for external stakeholders through AI as a good for all
|
Country |
Initiative |
Description |
Function |
|---|---|---|---|
|
Greece |
DidaktorikaAI |
Greece’s DidaktorikaAI platform, launched by the National Documentation Centre (EKT), improves the accessibility of academic and scientific knowledge for policymakers and the broader society through an AI-powered online library gathering more than 50 000 publications. |
Civic participation and open government |
|
Indonesia |
Bencana Bot |
In Jakarta (Indonesia), the AI-powered chatbot Bencana Bot prompts residents to report floods via social media, generating real-time, freely accessible online maps on PetaBencana.id. Combined with official data provided by the Jakarta Disaster Mitigation Agency, the platform helps residents stay safe during emergencies, with usage surging by 2 000% during major floods. |
Law enforcement and disaster risk management |
|
Spain |
Decide Madrid |
In 2021, the participatory platform Decide Madrid (Spain), based on the open-source software Consul, experimented with a Natural Language Processing (NLP) system to assist citizens in aggregating and developing proposals. |
Civic participation and open government |
|
United States |
MAPLE |
The AI-powered tool MAPLE (Massachusetts Platform for Legislative Engagement) allows citizens to better understand the context and objectives of draft legal texts through AI-generated summaries and to submit their inputs and comments on pending legislation. |
Civic participation and open government |
Some government functions are more mature regarding governing and adopting AI
AI’s potential is recognised across all functions of government, but the maturity of its adoption and governance varies significantly. A few functions are already implementing AI initiatives in a structured manner, learning from their implementation and, in some instances, scaling up successful solutions into other issue areas and broader contexts. For instance, in public service design and delivery, AI is widely used to automate tasks, retrieve and synthesise information, and improve digital service effectiveness and usefulness for users. In law enforcement and disaster risk management, AI is widely used to prioritise police resources, accelerate investigations and better anticipate and recover from disasters. However, other functions of government have only begun experimenting with limited, ad hoc prototypes and pilots. In fields such as policy evaluation, public financial management, and regulatory design and delivery, AI adoption is largely found in isolated pilots. In some functions, such as justice administration and fighting corruption and promoting public integrity, there is significant geographical variation. Some countries, like Argentina (Box 5.62 on Prometea) and Spain (Box 5.67 on AI-enabled domestic violence response), actively deploy sophisticated AI solutions even in high-risk areas, managing to mitigate risks that have led to scandals with similar systems in other contexts. Yet other countries are still in the early stages of digital transformation. As AI continues to appear in additional use cases, governments must mitigate risk in order to promote responsible AI adoption.
In terms of technical maturity, AI adoption varies not only in scale but also in the types of systems used. Some functions of government, such as tax administration and procurement, rely heavily on rules-based systems, which have been effective in automating structured decision-making processes for many years. Others, like law enforcement and disaster management, and fighting corruption and promoting public integrity, use more advanced ML systems to identify patterns, enhance risk assessments and support decision-making. However, in most functions of government, there is very little use of the latest GenAI models, such as LLMs, which offer new capabilities in knowledge synthesis and content generation that could be more transformational. This trend can be seen in other databases as well — only 61 of the 1 343 (4.5%) AI use cases in the EC Public Sector Tech Watch repository are GenAI (Brizuela et al., 2025[16]). A survey from Deloitte (2024[17]) also found an uneven level of preparedness for GenAI adoption across different sectors of government, though their sector categorisations do not align directly to the government functions in this report. The use of GenAI systems can be seen in a handful of government functions in Chapter 5, such as the design and delivery of regulations and public services, and in civic participation efforts, though many of these efforts appear sporadic or experimental. Some of the most advanced uses take the form of chatbots, which may be highly impactful but may not fully exploit the technology’s potential for large-scale synthesis, tailored content generation or making services more proactive or personalised. The slow adoption of these advanced systems in many domains, in addition to the relative recency of the technology and relevant AI use cases, suggests that while AI experimentation is widespread, the transition to more sophisticated, high-impact systems remains uneven across government functions and countries. 
This is not to say that governments should abandon all efforts in using more established forms of AI in pursuit of GenAI, as chasing the latest “cool tech” in areas where it is not a good match for the problem to be solved can contribute to AI project failure (Ryseff and Narayanan, 2025[18]).
A key factor shaping AI adoption across functions of government is data availability and quality. In tax administration, for example, AI has been widely deployed due to the abundance of structured data, which has enabled automation and risk assessment for years. Yet, because of the complex legal landscape in this function, particularly regarding taxpayer data, most efforts rely on classic rules-based systems, with challenges in pursuing more modern ML systems that could unlock productivity gains by also leveraging unstructured data. By contrast, while AI is increasingly used in private sector HRM functions, its application in civil service reform remains limited due to the insufficiency of comprehensive workforce data — covering employee skills, job demands and performance indicators. The abundance of data in governments does not necessarily translate into the availability of AI-ready data (see Chapter 3 on implementation challenges).
Beyond data, other fields could be facing challenges in AI maturity because the nature of their tasks requires technical capabilities that only recent advanced AI systems can provide. This suggests the potential for rapid acceleration in the coming months. For instance, AI adoption in regulatory design and delivery appears to have increased and accelerated through the use of LLMs to assist with analytical tasks that could not have been performed by other systems (for example, see Box 5.12 for legal and regulatory querying and drafting aids). In another example, AI is enabling mass deliberative civic participation efforts at scales never before feasible.
Different functions of government have different contexts and needs
AI adoption in government is often discussed in broad terms, but its impact and risks vary significantly across functions. Different functions have unique challenges, regulatory constraints and levels of AI readiness. For example, when AI is used in public services to improve healthcare, applications need to navigate stringent data privacy regulations and ethical concerns around medical decision-making, while AI in other fields, such as civic participation, can be more experimental, using real-time optimisation with less potential to infringe the rights of individuals. Actors in one function of government could use a similar AI solution to another's with vastly different results, impacts and implications. Thus, the discussion of functions of government in this section cannot be read as a like-for-like comparison, and further research is needed to understand differences, including with a larger scope of analysis.
Use cases could pose risks if not implemented in a trustworthy manner
As discussed in Chapter 1, this report categorises five types of risks faced by governments in adopting AI. In addition to the risks shown in Figure 2.7, the risk of inaction involves missed opportunities and the growing capacity gap between the public and private sectors. Such risks, which may materialise when governments fail to build capacities for and use AI, are not shown in the figure and may be difficult or impossible to measure precisely.
The OECD found that every one of the 200 use cases analysed for this report could pose one or more types of risk if not designed and used in a trustworthy way.
Figure 2.7. The potential for operational risks is the most represented across government functions
Number of use cases across functions of government categorised according to selected risk types
Note: In parentheses, the number of occurrences of risk types. Use cases can involve more than one type of risk. Thus, the total number of potential risk occurrences is greater than the total number of use cases.
Source: OECD analysis of identified use cases.
Potential operational risks are the most prevalent among the analysed use cases (93%). An example of this is Australia's Robodebt scheme, where investigations revealed that inadequacies in algorithmic design played a significant role in its failures, with its calculations ultimately ruled unlawful (Box 5.11). Specifically, the algorithm's oversimplification and lack of safeguards resulted in the issuance of 470 000 incorrect debt notices without human verification. This highlights the operational risks of automating complex social systems without sufficient human oversight or rigorous testing; it additionally presented ethical risks resulting in real-world harm.4
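One common mitigation for the operational risk illustrated by Robodebt is a human-in-the-loop routing rule: automated outputs are acted on directly only in narrow, low-stakes conditions, and everything else defaults to human verification. The sketch below is hypothetical: the function, confidence score and thresholds are assumptions for illustration, not a description of any real system.

```python
# Hypothetical human-in-the-loop guardrail for an automated decision system.
# A case is only closed automatically when the model is confident AND the
# outcome is low-stakes; any calculated debt is routed to a human instead
# of being issued as a notice without verification.

def route_decision(case_id, calculated_debt, confidence, review_threshold=0.9):
    """Return (case_id, action): 'auto_close' only for confident,
    zero-debt results; everything else goes to 'human_review'."""
    if confidence >= review_threshold and calculated_debt == 0:
        return (case_id, "auto_close")   # low-stakes, high-confidence path
    return (case_id, "human_review")     # default to human oversight

assert route_decision("c1", 0, 0.95) == ("c1", "auto_close")
assert route_decision("c2", 4_300, 0.95) == ("c2", "human_review")
assert route_decision("c3", 0, 0.55) == ("c3", "human_review")
```

The design choice is deliberately asymmetric: false positives (needless human reviews) are cheap, while false negatives (unverified automated harm) are the failure mode the Robodebt case exemplifies.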
Potential ethical risks were the second most prevalent among the analysed use cases (56%), such as in AI-supported applicant review tools used by HRM offices. In reality, ethical risks can be present in the large majority of AI use cases. However, the lower presence of this risk in the analysed use cases could be due to the limited and specific scope, and manner of application, of use cases identified for this report. An example of an ethical risk that resulted in real-world harm is the Netherlands' Toeslagenaffaire (childcare benefits scandal), where an AI system wrongfully accused 26 000 families of fraudulently claiming childcare benefits due to a biased algorithm that targeted families with dual nationalities or migrant backgrounds, forcing many to repay undue debts (Box 5.6). This case illustrates how ethical risks can cause harm if not mitigated appropriately.
Potential public resistance risks were identified in 50% of the analysed use cases. Previous failures in AI deployment have significantly damaged reputations and eroded public trust in governments' capacity to use AI responsibly. These cases underscore the necessity for governments to take steps to prevent risks and to swiftly address potential failures in AI use, fostering public trust. This can be achieved with appropriate guardrails, including strong accountability and redress mechanisms, continuous monitoring and oversight, and effective risk management.
Finally, the potential for risks of exclusion was identified in 38% of the analysed use cases. For example, citizen participation platforms that have deployed AI tools to assist citizens in aggregating and developing proposals may pose challenges for individuals lacking digital skills. This could inadvertently benefit the already advantaged, enhancing their ability to further promote their ideas (Duberry et al., 2021[19]; Wang et al., 2024[20]). AI use cases also help civil servants to efficiently process vast quantities of citizen inputs and enhance the facilitation of participatory processes. However, the success of these applications is contingent upon the diversity incorporated in the training data of the AI system employed; there is a risk these tools may fail to adequately capture the diversity of public opinion (ECNL, 2024[21]). Many AI-assisted translation tools in participation platforms may also fail to capture nuances and different cultural contexts in minority languages, as they are usually trained on data from English and other dominant languages (ECNL, 2024[21]).5
The greater or lesser presence of certain risks across the functions of government is of interest, too. For instance, observed cases in justice administration and civil service reform appear to feature ethical risks more prominently due to the potential for adverse outcomes in some of their cases. In the tax administration field, existing controls and regulations might create safeguards against ethical and exclusion risks, making operational risks more prevalent. In civic participation, public resistance risks are more prevalent than in other functions, likely because most of its use cases involve government-to-citizen interactions and thus depend more heavily on public acceptance. This is similar for public procurement, where suppliers' perceptions of and public trust in AI systems play a key role in their success; resistance risks thus feature more prominently there. This comparative analysis shows the importance of acknowledging the main drivers of potential risks in each field, which can inform how to mitigate and manage them.
Governments are vigilant of several AI risks, though some may be overlooked
Overall, through conducting analysis and interacting with governments in the development of Chapters 4 and 5, it appears many national governments are well informed about, and have put in place processes to manage, risks associated with data, a lack of transparency and explainability, and AI misuse — either intentional or inadvertent — and the potential for resulting harms or privacy infringement. This is positive and not surprising, as these risks are often raised by AI experts and in research both within and beyond the public sector (OECD, 2024[13]).6 To a somewhat lesser extent, the risk of overreliance on AI technologies also appears to be a consideration for government efforts. Overall, however, there appears to be less emphasis on ensuring that government use of AI does not further exacerbate digital divides. When complementary service channels are not made available, government ambitions to automate and streamline processes could reduce service opportunities for communities with less access to digital services or a preference for non-digital approaches (Welby and Hui Yan Tan, 2022[22]). In addition, while governments are clearly aware of how AI could contribute to productivity and help shift public servants' efforts away from repetitive tasks and towards more meaningful work, there is seemingly less recognition of AI's potential to reduce job quality (e.g. through invasive algorithmic management) or cause job displacement (Peixoto, Canuto and Jordan, 2024[23]). Even if AI adoption becomes systematic in public administrations, governments will need to ensure that all citizens are properly served and that the concerns and rights of the civil service are taken into account in order to promote AI adoption.
Finally, governments need to better consider the risk of inaction. Throughout Chapters 4 and 5, it is clear that many governments are working towards specific goals with their AI efforts, and that they are aware of and are seeking to mitigate a variety of AI risks by instantiating approaches to unlock the potential of AI. However, the analysis conducted for this report and discussions with relevant government officials indicate there may be limited awareness of missed opportunities due to slow AI adoption or the consequences of the widening gap in AI capabilities between the public and private sectors.
Few governments have assessed the extent to which AI could make an impact in their internal operations and public-facing services. Furthermore, most have not fully articulated their ambitions for AI in government, determined the existing gaps, or proposed clear roadmaps to close those gaps and meet their goals. Governments need to explore AI not only to enhance the design and implementation of public policies and services, but also to ensure they have the knowledge and capacity to regulate AI development and use in government and beyond. In an inquiry by the Parliament of Australia (2025[24]), the Joint Committee of Public Accounts and Audit expressed "very grave concern" that AI will soon outpace the government's ability to regulate it. Such committee findings are not necessarily representative of the Australian Government's views. AI experts have also noted that the inability of governance mechanisms and institutions to keep pace with rapid AI evolutions is one of the most critical risks associated with AI (OECD, 2024[13]).
In addition to these critical risks associated with AI use in government, the OECD's analysis of 200 use cases has identified a variety of challenges governments can face in adopting the technology. Some of these implementation challenges are shared across all functions, while others are more prevalent in certain fields. They can translate into broader issues and can hinder the strategic use of AI in government. These issues are discussed in the following chapter.
References
[17] Austin, T. et al. (2024), A snapshot of how public sector leaders feel about generative AI, https://www2.deloitte.com/us/en/insights/industry/public-sector/ai-adoption-in-public-sector.html.
[12] Bailey, K. (2016), Reframing the “AI Effect”, https://medium.com/@katherinebailey/reframing-the-ai-effect-c445f87ea98b.
[15] Berryhill, J. et al. (2019), “Hello, World: Artificial intelligence and its use in the public sector”, OECD Working Papers on Public Governance, No. 36, OECD Publishing, Paris, https://doi.org/10.1787/726fd39d-en.
[16] Brizuela, A. et al. (2025), Analysis of the generative AI landscape in the European public sector, European Commission, https://op.europa.eu/s/z4XY.
[9] Columbia University (2020), The Future of AI in the Brazilian Judicial System, https://www.sipa.columbia.edu/aidriven-innovations-brazilian-judiciary.
[19] Duberry, J. et al. (2021), “Artificial intelligence and civil society participation in policy-making processes: Thinking about AI and participation.”, SSRN Electronic Journal, https://doi.org/10.2139/ssrn.3817666.
[10] EC (2025), Public Sector Tech Watch latest dataset of selected cases, http://data.europa.eu/89h/e8e7bddd-8510-4936-9fa6-7e1b399cbd92 (accessed on 4 April 2025).
[21] ECNL (2024), Can AI tools and platforms make public engagement truly meaningful and inclusive?, https://ecnl.org/news/ai-public-participation-hope-or-hype (accessed on 17 March 2025).
[4] Government of Canada (2025), AI Strategy for the Federal Public Service, https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/gc-ai-strategy-overview.html.
[5] Government of Switzerland (2025), Strategy Use of AI systems in the Federal Administration, https://www.bk.admin.ch/bk/en/home/digitale-transformation-ikt-lenkung/ikt-vorgaben/strategien-teilstrategien/sb021-strategie-einsatz-von-ki-systemen-in-der-bundesverwaltung.html.
[6] Government of Uruguay (2021), AI Strategy for the Digital Government, https://www.gub.uy/agencia-gobierno-electronico-sociedad-informacion-conocimiento/comunicacion/publicaciones/ia-strategy-english-version/ia-strategy-english-version/ai-strategy-for.
[8] Harvard Kennedy School (2023), AI, judges and judgement: setting the scene, https://www.hks.harvard.edu/centers/mrcbg/publications/awp/awp220.
[11] Muñoz-Cadena, S. et al. (2025), Sistemas de IA en el sector público de América Latina y el Caribe (Versión V2), https://sistemaspublicos.tech/sistemas-de-ia-en-america-latina/ (accessed on 29 April 2025).
[25] Netherlands Court of Audit (2024), Central government often does not assess risks of AI, https://english.rekenkamer.nl/latest/news/2024/10/16/central-government-often-does-not-assess-risks-of-ai.
[7] OECD (2024), “2023 OECD Digital Government Index: Results and key findings”, OECD Public Governance Policy Papers, No. 44, OECD Publishing, Paris, https://doi.org/10.1787/1a89ed5e-en.
[13] OECD (2024), “Assessing potential future artificial intelligence risks, benefits and policy imperatives”, OECD Artificial Intelligence Papers, No. 27, OECD Publishing, Paris, https://doi.org/10.1787/3f4e3dfb-en.
[1] OECD (2024), “Governing with Artificial Intelligence: Are governments ready?”, OECD Artificial Intelligence Papers, No. 20, OECD Publishing, Paris, https://doi.org/10.1787/26324bc2-en.
[2] OECD (2023), “The state of implementation of the OECD AI Principles four years on”, OECD Artificial Intelligence Papers, No. 3, OECD Publishing, Paris, https://doi.org/10.1787/835641c9-en.
[3] OECD (2021), “State of implementation of the OECD AI Principles: Insights from national AI policies”, OECD Digital Economy Papers, No. 311, OECD Publishing, Paris, https://doi.org/10.1787/1cd40c44-en.
[24] Parliament of Australia (2025), Report 510: Inquiry into the use and governance of artificial intelligence systems by public sector entities - ’Proceed with Caution’, https://parlinfo.aph.gov.au/parlInfo/download/committees/reportjnt/RB000567/toc_pdf/Report510Inquiryintotheuseandgovernanceofartificialintelligencesystemsbypublicsectorentities-'ProceedwithCaution'.pdf.
[23] Peixoto, T., O. Canuto and L. Jordan (2024), “AI and the Future of Government: Unexpected Effects and Critical Challenges”, Policy briefs on Economic Trends and Policies, Vol. 2408, https://ideas.repec.org/p/ocp/pbecon/pb_10-24.html.
[18] Ryseff, J. and A. Narayanan (2025), Why AI Projects Fail, https://www.rand.org/pubs/presentations/PTA2680-1.html.
[14] The Alan Turing Institute (2024), AI for bureaucratic productivity: Measuring the potential of AI to help automate 143 million UK government transactions, https://www.turing.ac.uk/news/publications/ai-bureaucratic-productivity-measuring-potential-ai-help-automate-143-million-uk.
[20] Wang, C. et al. (2024), “The artificial intelligence divide: Who is the most vulnerable?”, New Media & Society, https://doi.org/10.1177/14614448241232345.
[22] Welby, B. and E. Hui Yan Tan (2022), “Designing and delivering public services in the digital age”, OECD Going Digital Toolkit Notes, No. 22, OECD Publishing, Paris, https://doi.org/10.1787/e056ef99-en.
[26] Yigitcanlar, T. et al. (2024), Local governments are using AI without clear rules or policies, and the public has no idea, https://theconversation.com/local-governments-are-using-ai-without-clear-rules-or-policies-and-the-public-has-no-idea-244647.
Notes
1. Not all of the use cases analysed appear in this report. The OECD selected for the report the cases that best illustrate the various themes and findings presented.
2. See https://oecd-opsi.org/innovation-tag/artificial-intelligence-ai and https://oecd.ai/dashboards/policy-instruments/AI_use_cases_in_the_public_sector, respectively. The OPSI collection included a global open “Call for Innovations” crowdsourcing exercise focused on innovations in public services in 2024.
3. As categorised in the observatory’s Classification of the Functions of Government. See https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Glossary%3AClassification_of_the_functions_of_government_%28COFOG%29.
4. The Robodebt scheme leveraged automated data-matching, income averaging and overpayment calculation. As discussed in Box 1.1 of this report, many argue that such systems should not be considered AI at all. Thus, the Robodebt scheme could be better described as an automated decision-making system. Nevertheless, it helps to illustrate issues in governance, ethical oversight and algorithmic design.
5. See also the related discussion on Exclusion Risks in Chapter 1.
6. While governments may be informed on these and other issues and, as indicated in Chapter 4, many have frameworks and processes in place to mitigate risks, they still need to act on and follow through with those frameworks and processes in order to manage risks. There are instances in which governments may not fully comply with internal requirements or may face incentives to classify systems as low-risk, thereby reducing accountability requirements (Netherlands Court of Audit, 2024[25]). In addition, some research suggests that local governments may be less informed, and less likely to undertake risk mitigation activities, than national governments (Yigitcanlar et al., 2024[26]). Finally, this report does not seek to evaluate the quality and effectiveness of government processes and mechanisms, though it does highlight those that appear sound and are emerging as best practices.