Governing with Artificial Intelligence
1. How artificial intelligence is accelerating the digital government journey
Abstract
This chapter explains how artificial intelligence (AI) can accelerate the digital government journey. It situates government as a developer and user of AI, going beyond the traditional roles of investor and regulator. The chapter groups opportunities – productivity (efficiency and effectiveness), responsiveness and accountability – across the policy cycle, and stresses prerequisites in data and information management. It also outlines government-specific risks and the risks of inaction within emerging regulatory approaches, and closes with a vision for trustworthy AI in government.
Key messages
Artificial intelligence (AI) has the potential to reshape industries, economies, governments and societies. Yet its progress in government has been limited.
AI can help governments in three key opportunity areas: productivity, responsiveness and accountability.
At each stage of the policy cycle, AI can bring highly complementary benefits:
automating mundane and repetitive tasks
improving productivity in analytical or creative tasks
tailoring services to address personalised citizen needs
tailoring approaches to strengthen the civil service
enhancing decision-making and sense-making of the present
better forecasting of the future
improving information management and accessibility
detecting improper transactions and assessing integrity risks
enabling non-governmental actors to understand and engage with government and promote accountability
unlocking opportunities for external stakeholders through AI as a good for all.
These benefits are not mutually exclusive and can be categorised into four broad areas:
automated, streamlined and tailored processes and services
better decision-making, sense-making and forecasting
enhanced accountability and anomaly detection
unlocking opportunities for external stakeholders.
Governments should manage the risks of AI that are specific to government use: ethical risks, operational risks, exclusion risks, public resistance risks and risks of inaction.
A vision of a future where governments successfully develop and adopt trustworthy AI for systematic transformation of government processes and services is beginning to emerge.
The digital government journey
Digital government is essential to transforming processes and services in ways that improve the public sector’s responsiveness and reliability and bring governments closer to their people. Since the adoption of the OECD Recommendation on Digital Government Strategies (2014[1]), the OECD has been promoting digital government in OECD member countries and beyond, supporting them in their efforts to achieve government digital maturity. Digitally mature governments recognise that technology is a strategic driver not only to improve efficiency, but also to make policies more effective and governments more open, accountable, innovative, participatory and trustworthy.
The COVID-19 pandemic underscored the importance of digital technologies and data in building economic and social resilience through strategic, agile and innovative government approaches. While the pandemic and the multidimensional crisis it provoked disrupted governments, it also offered an opportunity to revisit strategic approaches on the use of digital tools and data to improve the delivery of public services. Faced with no alternative, governments compressed years’ worth of technological advancements into weeks and months. Deploying technology solutions at scale enabled governments to continue operating in times of crisis, and secured the timely provision of services to citizens and businesses (OECD, 2020[2]; [3]). Where digital technologies or data were not used strategically or effectively, the crisis highlighted gaps and exacerbated challenges, which governments are working to address to this day.
Today, governments worldwide are facing decreasing levels of public trust (OECD, 2024[4]), while simultaneously experiencing growing and rapidly accelerating changes brought about by the digital age. In this time of fast-paced disruption — rapid technological evolution, changing societal needs, unexpected crises — it is crucial governments be capable and equipped to use digital technologies and data to increase productivity and resilience in their public administrations, and enhance the quality of public services.
Institutionalising digital government, with varying maturity levels
To unlock the potential of digital government, establishing the right institutional arrangements, coordination mechanisms and policy instruments is critical to sustaining the needed long-term transformations and overcoming changing political priorities. The OECD (2020[3]) Digital Government Policy Framework establishes six dimensions critical for establishing a digital government:
1. Digital by design: designing policies to enable the public sector to use digital tools and data in a coherent way when formulating policies or transforming public services.
2. Data-driven: developing the governance and enablers needed for data access, sharing and re-use across the public sector.
3. Government as a platform: deploying common building blocks such as guidelines, tools, data, digital identity and software to advance a coherent transformation of government processes and services across the public sector.
4. Open by default: openness beyond the release of open data, including efforts to encourage the use of technologies and data to communicate and engage with different actors.
5. User-driven: placing user needs at the core of the design and delivery of public policies and services, including through engagement with users and measurement of metrics to assess impact and satisfaction.
6. Proactiveness: anticipating the needs of users and service providers to deliver government services proactively.
The OECD Digital Government Index (DGI) benchmarks governments’ maturity across these dimensions (Figure 1.1). The figure shows that some countries are further along in their journey towards digital government maturity than others, and the broad body of OECD analysis on digital government shows that each country faces its own challenges in achieving maturity.1
Figure 1.1. OECD 2023 Digital Government Index, composite results by country
Note: Data for Germany, Greece, Slovakia, Switzerland and the United States (US) are not included.
Source: (OECD, 2024[5]).
AI’s growing role in digital government
The OECD defines an AI system as:
“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” – see OECD Explanatory Memorandum for further clarification (2024[6]; [7]).
Global AI discussions mainly focus on governments as AI regulators or investors, but significant opportunities exist for government as an AI developer and user. Not only do governments set national priorities, investments and regulations for AI, but they increasingly use it to design and implement policies and services. Although hype around AI has increased in recent years, governments are not new to using AI; there are thousands of government AI projects underway around the world.2
Since 2019, the OECD has been working to better understand AI’s uses and implications in the unique context of government. This includes developing foundational pieces on the technical underpinnings of AI and its use and implications by and for government (2019[8]; [9]); targeted analysis on specific countries (2022[10]; 2024[11]; [12]); surfacing government innovation trends, which often involve AI;3 and establishing a preliminary framework for AI in government (2024[13]). The OECD has also collected details on hundreds of initiatives for AI in government.4
The 2023 DGI highlights that while some governments have been deploying a wide range of initiatives to enhance their capacity to use AI, implementation is still a challenge for most. At the time of the DGI’s publication, 70% of countries had used AI to improve internal governmental processes, while only 33% had used AI to enhance policy design and implementation. Although use is increasing, AI in government has not yet made a transformative impact. The forthcoming 2025 DGI will include updated figures and more in-depth comparative analysis. It will also incorporate complementary qualitative evidence to further inform how governments can implement the right enablers, safeguards, risk mitigation and engagement mechanisms to adopt trustworthy AI while monitoring for adverse effects.
Understanding AI’s transformative potential
AI is one of the most transformative forces of the 21st century. It is reshaping industries, economies, governments and societies at an unprecedented pace. If governments and other AI actors are successful in seizing AI’s benefits while mitigating its risks, AI experts and researchers envision a future in which AI contributes to scientific and medical breakthroughs, such as discovering new cancer treatments; catalyses productivity growth from a 1-7% rise in global gross domestic product (GDP) by 2033 to a 10-fold increase in the decades to come; eliminates poverty and reduces inequality; and helps address weather-related impacts and natural disasters (OECD, 2024[14]).
While AI has gained intense worldwide attention in recent years, AI research and development has been going on for over 70 years. Before taking a deeper look at AI use in government, it is useful to understand some of the background on AI and why it has recently become a topic of household discussion, as discussed in Box 1.1.
Box 1.1. The evolution of AI
The AI landscape has evolved significantly since the 1950s, when British mathematician Alan Turing first posed the question of whether machines can think. For decades, “rules-based” or “symbolic” AI systems dominated research, using a series of “If-then” (If a condition, then an action) statements that, when taken together, would give the appearance of intelligent action. Such systems are limited and require significant human knowledge to programme the rules. They are still in use today, such as in robotic process automation (RPA) software bots that automate human-programmed tasks. Due to their limitations, some argue that rules-based systems and RPA should not be considered AI at all.
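The if-then logic described above can be sketched in a few lines of code. This is a purely illustrative example with hypothetical rules and field names, in the style of the RPA bots mentioned above; it shows how such a system only appears intelligent because a human has written every rule:

```python
# Illustrative rules-based routing sketch (hypothetical rules, not from any
# real government system). All "intelligence" is hand-written by a human.
def route_request(request: dict) -> str:
    # If the request mentions a payment, then send it to the finance desk.
    if "payment" in request["subject"].lower():
        return "finance"
    # If the request attaches a form, then send it to document processing.
    if request.get("has_form"):
        return "forms"
    # Otherwise, fall back to a human case officer.
    return "human_review"

print(route_request({"subject": "Late payment notice", "has_form": False}))  # finance
```

Anything not anticipated by a rule falls through to the default branch, which is exactly the limitation that motivated the machine-learning approaches discussed next.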
The 21st century saw breakthroughs in the branch of AI called machine learning (ML), which improved the ability of machines to make predictions from historical data. ML focuses on the development of systems that can learn and adapt without following explicit instructions, imitating the way humans learn and gradually improving their accuracy by using algorithms and statistical models to analyse and draw inferences from patterns in data. The “learning” process using machine-learning techniques is known as “training”.
The application of ML techniques, the availability of large datasets, and faster and more powerful computing hardware have converged, dramatically increasing the capabilities, impact and availability of AI models and systems. Inspired by the human brain, neural networks are made up of layers of “neurons”, known as “nodes”, that process inputs with weights and biases to give specific outputs. A subset of algorithms in the area of neural networks — called deep neural networks (in the field of study and set of techniques called deep learning) — allows machine-based systems to “learn” from examples to make predictions or “inferences” based on the large amount of data processed during their training phase. Because of their complexity, it can be difficult to understand how they work or produce a given output.
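A minimal sketch of the forward pass of the “nodes” described above. The weights and biases below are made-up illustrative values; in a real system they are learned during training, and networks contain many layers of many such nodes:

```python
import math

# Each node multiplies its inputs by weights, adds a bias, and passes the
# sum through an activation function (here, a sigmoid).
def node_output(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes output to (0, 1)

# A layer is several nodes that share the same inputs.
def layer_output(inputs, layer_weights, layer_biases):
    return [node_output(inputs, w, b)
            for w, b in zip(layer_weights, layer_biases)]

# Two inputs feeding a layer of two nodes, with illustrative weights/biases.
hidden = layer_output([0.5, -1.0], [[0.8, 0.2], [-0.4, 0.9]], [0.1, 0.0])
print(hidden)
```

Stacking many such layers yields the deep neural networks described above, whose complexity is precisely why their outputs can be hard to explain.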
Recent conceptual breakthroughs
In 2017, Google researchers introduced a type of neural network architecture called “transformers”, which learn to detect how data elements — such as the words in this sentence — influence and depend on each other. Unlike previous neural networks, transformers can process inputs from a sequence, such as words of text, in parallel. This unleashed major progress by enabling AI developers to design larger-scale language models with more parameters and greater efficiency. This contributed greatly to advances in generative AI (GenAI), including large language models (LLMs), that can generate novel content and enable consumer-facing applications like advanced chatbots at people’s fingertips.
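The attention computation at the heart of transformers can be sketched in miniature. The token vectors below are toy values rather than a real model; the sketch only illustrates how each position scores its dependence on every other position and takes a weighted average, for all positions in parallel:

```python
import math

# Softmax turns raw scores into weights that sum to 1.
def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Scaled dot-product attention: each query scores every key, then takes a
# weighted average of the values.
def attention(queries, keys, values):
    d = len(keys[0])
    outputs = []
    for q in queries:  # each position attends to all positions
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy "token" vectors
out = attention(seq, seq, seq)  # self-attention: sequence attends to itself
print(out)
```

Because every position's scores can be computed independently, the whole sequence is processed in parallel, which is what enabled the larger-scale models described above.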
For many, AI became “real” in 2022, the year that OpenAI’s ChatGPT (Chat Generative Pre-Trained Transformer) became the fastest-growing consumer application in history. Transformers also contributed to the advent of foundation models, which are trained on large amounts of data and can be adapted (i.e. fine-tuned) and built upon to conduct a wide range of downstream tasks. Although transformers are often discussed, other approaches exist, especially for non-text (e.g. images, video, audio) generation, such as generative adversarial networks (GANs) and diffusion models.
Most AI today is “narrow”, but some argue more “general” forms of AI are emerging
Most AI today can be considered “narrow” (designed to perform a specific task), but some experts argue that foundation models are an early form of more “general” AI. This includes progress towards the hypothetical advent of artificial general intelligence (AGI) — a controversial concept that can be described as machines with human-level or greater intelligence across a broad spectrum of contexts. There is substantial debate and uncertainty among experts about when or if AGI might be developed, and what potential opportunities and challenges it may bring.
While some experts believe AGI will be developed at some point, emerging early forms of “agentic” AI systems — which can operate in a somewhat autonomous manner without the constant need for human guidance — hint at the potential for future systems that can handle more general tasks with minimal human input. For instance, LLM-based “agents” have already been developed to autonomously search the internet and interpret what they find on behalf of the user. Such systems are very early, comprising many limitations and risks, but further advancements may yield opportunities across sectors.
As AI systems become increasingly capable, many argue that humans should not defer decisions to machines but rather work in tandem, or in machine-human collaboration, using AI to assist decision-making.
Note: The OECD primer Hello World: Artificial intelligence and its use in the public sector (Berryhill et al., 2019[8]) provides details on the technical underpinnings and potential implications of AI.
Despite the hype, AI progress is limited
Data from the OECD.AI Policy Observatory demonstrates the boom in interest in AI in recent years. For example, Figure 1.2 shows significant growth in venture capital investments in AI over time. While interest is high, advisory firm Gartner’s latest “hype cycle” puts GenAI just beyond its “peak of inflated expectations” and beginning its descent into the “trough of disillusionment”, “as business focus continues to shift from excitement around foundation models to use cases that drive ROI” (Gartner, 2024[23]). Still, it expects GenAI and some other forms of AI, such as AI supercomputing and the use of AI to support and enforce AI governance policies, trust, risk and security, to reach fuller productivity in just two to five years.
Figure 1.2. Venture capital investments in AI have grown over the years
Global venture capital investments in AI in USD millions by country from 2012 onwards
Note: A methodological note with more information can be found at https://oecd.ai/p/methodology. The surge in investments in 2021 was in part due to an increase in “healthcare, drugs and biotechnology” AI investments during the COVID-19 pandemic. A significant spike that year was also seen in “Mobility and autonomous vehicles”.
Source: OECD.AI (2025), visualisations powered by JSI using data from Preqin, last updated 3 June 2025, accessed 16 June 2025, www.oecd.ai.
Although some AI experts predict significant economic gains from AI, OECD (2024[24]) research indicates more tepid growth, estimating annual productivity growth due to AI ranging between 0.25-0.6 percentage points over the next 10 years in the most AI-ready countries. Research shows AI improves individual worker productivity (OECD, 2023[25]; Bengio et al., 2025[26]), but evidence connecting this to broader organisational and economic gains is weak. This is, in part, because some tasks cannot yet be conducted by AI and not all organisations or workers are ready to adopt it. Some evidence suggests firms adopting AI are more productive and grow faster than those that do not (Calvino and Fontanelli, 2023[27]; Hampole et al., 2025[28]), but this should not be interpreted as causality. For now, limitations persist. According to US Census Bureau statistics, only 5-6% of US businesses use AI to produce goods and services and only 7% plan to adopt AI in the coming months (Williams, 2025[29]). In a more global survey, only an estimated 26% of companies have the capabilities needed to derive real value from AI, and only 4% have succeeded in generating significant value (BCG, 2024[30]).5
Beyond economic gains, AI’s transformative potential to achieve positive societal outcomes is beginning to show signs of promise. However, its full impact has yet to be realised. For instance, AI in science has contributed to real progress in robotics, nuclear fusion, drug discovery, antibody generation and protein folding (OECD, 2023[31]). Despite these early successes, many uses remain localised or experimental, and systemic change on a global scale is still forthcoming. AI’s contribution to science is just beginning, and in some areas, the technology may have achieved less than anticipated. For example, some found AI contributed little to research during the COVID-19 pandemic (OECD, 2023[31]). So far, AI has mostly contributed to breakthroughs in a narrow set of natural and physical sciences. Similar transformations in other disciplines, such as social sciences, have progressed less despite high expectations (Manning, Zhu and Horton, 2024[32]). As such, while AI's societal benefits are emerging, the full scope of its transformative potential is still unfolding.
Using AI in government, a unique context
In addition to governing AI for society by setting the conditions and regulations for its trustworthy use, governments are striving to integrate the technology to better govern with AI. As in the private sector, the use of AI in government promises tremendous benefits while posing a number of risks and challenges. In fact, a Deloitte survey (2024[33]) of 2 770 senior leaders across 14 countries found public sector leaders were twice as likely as industry leaders to foresee AI-driven transformation in their organisations in the near term, but they felt more cautious and were less optimistic that it would result in productivity gains. Yet the subject has only recently become a focus of mainstream public management literature and many governments (Mergel et al., 2023[34]; Mellouli, Janssen and Ojo, 2024[35]). This is due to a combination of factors, including recent technological breakthroughs resulting in AI applications that are more practical and effective for government use (see Box 1.1); government access to vast amounts of data that can be used as inputs for AI systems; and ongoing fiscal pressures that make AI attractive as a way to streamline operations and reduce costs. Even so, government use of AI trails that of the private sector.
While some lessons learned and success factors can be derived from industry efforts (Santos et al., 2024[36]), the purpose of and context within government are unique and present a number of specific opportunities and challenges. In addition, the field of AI is complex, progressing rapidly and has a steep learning curve for public servants and policymakers. If successful, however, the application of AI in government promises to significantly impact the wider economy and society by enhancing the quality and outcomes of public services, policies and government operations (Berglind, Fadia and Isherwood, 2022[37]).
Governments have huge influence over and impact in people’s lives, bringing with it a duty of care for the public good — one that goes above that of companies (OECD, 2023[38]; Santiso, 2023[39]). Thus, they have a special responsibility to deploy AI in a way that minimises harm and prioritises the well-being of individuals and communities. This is especially the case when deploying AI in sensitive policy domains such as law enforcement, immigration control, welfare benefits and fraud prevention (OECD, 2024[13]).
Governments also operate with a unique mandate: they serve the public interest and are funded by public resources. As such, their actions — particularly those involving data and digital technologies — need to be guided by principles that uphold democratic values, individual rights and the rule of law. Unlike private entities, which may prioritise efficiency or profit, governments are expected to act transparently and with due regard for the public good.
Key opportunity areas and benefits for AI in government
Opportunities for governments as developers and users of AI include the potential to transform service delivery, policymaking, internal operations and oversight. This is a pivotal moment for governments worldwide. Grappling with the rapid advances in AI technologies, they are trying to seize the opportunities provided by AI to innovate and modernise public administration, while managing and mitigating the associated risks, discussed below, and implementation challenges, discussed in Chapter 3.
Embracing AI in government unlocks new possibilities. Through years of research on the topic and working with governments around the world, the OECD (2024[13]) has identified three concrete opportunity areas for government use of AI:
Productivity with more efficient internal operations and more effective policy design, decision-making and service delivery. For example, using predictive AI systems for more effective policy planning, automating processes for more accelerated service delivery, and boosting performance by allowing civil servants to focus less on mundane tasks and more on mission-critical activities.
Responsiveness of public policies and services through enhanced design and delivery approaches that better meet evolving needs of citizens and specific communities, as well as through improved civic participation mechanisms. This includes offering more personalised public services more proactively.
Accountability by enhancing capacity for oversight and transparency, for instance through real-time monitoring. This shift may boost overall public satisfaction and enhance the perception of government as competent, fair and responsive, thereby strengthening public trust in government’s capacity for innovation and transformation.
Table 1.1 illustrates how various AI tasks can feed into government activities, thus supporting these opportunity areas.
Table 1.1. Understanding the use of AI in government
| AI tasks | Government activity | Opportunity area |
| --- | --- | --- |
| Recognition; Event detection; Forecasting; Personalisation; Interaction support; Goal-driven optimisation; Content generation; Reasoning with knowledge structures (spanning all rows) | Internal operations | Productivity (efficiency and effectiveness) |
|  | Policymaking | Responsiveness |
|  | Service delivery | Accountability |
|  | Internal and external oversight |  |
Note: The AI tasks column is adapted from the “AI System Tasks” of the OECD Framework for the Classification of AI Systems (2022[40]).
Source: (OECD, 2024[13]).
Key benefits for AI in government
To guide investment decisions, it is crucial for public servants, especially decision-makers in leadership positions, to understand the benefits AI can offer. A study by the European Commission (EC) (2024[41]), which surveyed 576 public managers across seven countries, found that AI’s perceived benefits significantly influence its adoption. AI has the potential to enhance decision-making at various stages of the policy cycle (Figure 1.3).6 The sections below outline key benefits of using AI in government. These benefits are not mutually exclusive and are indeed highly complementary, with some overlap, across four concepts: automated, streamlined and tailored processes and services; better decision-making, sense-making and forecasting; enhanced accountability and anomaly detection; and unlocking opportunities for external stakeholders through AI as a good for all. It is important to note, however, that the use of AI in government can also pose risks, several of which are the converse of, or can undermine, the potential benefits. The next section provides a dedicated discussion of risks.
Figure 1.3. AI at each stage of the policy cycle
Automated, streamlined and tailored processes and services
AI-enabled automation can help directly automate existing processes and services, or contribute to a full re-imagining of how governments work, both in internal operations and in public-facing services. In leveraging vast data assets, governments can also use AI to develop tailored services precisely crafted for specific individuals and groups. These benefits not only make government more efficient, effective and responsive, but can also improve job quality for public servants by enabling them to spend more time on valuable and meaningful work. Nearly two-thirds of workers surveyed by the OECD (2023[25]) reported that AI improved their enjoyment of work, and studies show this can enhance workers’ well-being (Brougham and Haar, 2017[43]; Xu, Xue and Zhao, 2023[44]). However, as discussed below, some AI uses can reduce job quality and potentially lead to public service workforce displacement.
Automating mundane and repetitive tasks
Governments can use AI to enhance the efficiency of their internal operations and service delivery activities, reducing the time public officials invest in monotonous tasks (OECD, 2024[13]). Typically, these tasks are repetitive and do not require extensive analytical thinking or human judgment. By automating them, AI can streamline workflows, reduce errors, optimise resource allocation and free up human resources for more complex, judgment-intensive activities. Ultimately, this leads to more efficient delivery of higher-quality public services (OECD/UNESCO, 2024[12]).
Repetitive and time-consuming tasks include:
Data entry: manually inputting data into various systems and databases.
Payroll processing: calculating and processing employee salaries and benefits.
Basic customer inquiries: addressing routine inquiries and providing information to the public.
Information verification: checking and verifying the authenticity of documents.
Form processing: handling and processing various application forms.
Email and correspondence: sorting, responding to and managing official emails and correspondence.
Governments can use a variety of AI systems for these types of tasks, ranging from simple rules-based systems to more advanced ML systems, such as LLM-enabled chatbots. These systems have extensive capabilities, ranging from handling simple and routine inquiries (with citizens as well as with public servants) to generating entirely new tailored content and optimising resource allocation (Lorenz, Perset and Berryhill, 2023[17]; Sapci and Sapci, 2019[45]).
As an example, Argentina is automating repetitive tasks and expediting case processing in justice administration through its AI-driven Prometea system (see Chapter 5, Box 5.62). As discussed below, however, AI-enabled automation can pose risks — in areas such as justice administration — that governments need to consider. In the case of Prometea, for example, Argentina seeks to limit these risks through human control of how the AI system and its outputs are used (Corvalán and Le Fevre Cervini, 2020[46]).
Improving productivity in analytical or creative tasks
AI further contributes to productivity through the discovery of new ideas and through new, more efficient and effective means of conducting work (Jones, 2022[47]). One of AI’s most notable benefits is its ability to help government offices handle and manage the analysis and synthesis of extensive documentation (OECD/UNESCO, 2024[12]). While a variety of AI-enabled tools may be useful, LLMs in particular can serve as powerful assistants for government officials in this regard, aiding in tasks such as research, content summarisation and synthesis (Berglind, Fadia and Isherwood, 2022[37]). Research conducted with knowledge professionals in the private sector showed that AI use can improve the performance of individuals and teams, and also break down functional siloes (Dell’Acqua et al., 2025[48]). Governments can use AI to reduce civil servants’ workload and to improve access to information for both citizens and public servants.
Some relevant uses include:
Processing and categorising textual information: AI tools can quickly and accurately analyse extensive texts and unstructured documents, highlighting key points and summarising information. This enhances efficiency in departments dealing with vast amounts of information, such as legal affairs and administrative processes (OECD/UNESCO, 2024[12]). Consequently, workflows are sped up and the risk of human error is reduced, leading to more accurate and reliable outcomes in internal operations and in service delivery activities.
Drafting documents and legal texts: AI systems can generate preliminary drafts of various types of documents by using templates and existing legislation. This process helps ensure adherence to standards while saving time and resources. Additionally, AI can cross-reference new drafts with current laws, identifying potential conflicts and minimising human errors. For report drafting, AI tools can offer automatic suggestions for clearer and more concise structures. Furthermore, it can improve communication of extensive reports by summarising them into shorter formats for dissemination to decision-makers or the public.
Making sense of unstructured inputs: AI can analyse and synthesise large amounts of information from participatory processes, public service feedback and consultations, turning them into actionable recommendations. It can identify recurring topics, cluster opinions, detect outliers, perform sentiment analysis and rank policy options based on preferences. This can help governments identify emerging issues, better consider stakeholder concerns and address potential policy impacts.
Studies show that generative AI systems reduce the time people spend conducting tasks while improving output quality. They also show that these tools have a greater impact on the productivity of lower-skilled workers, such as junior employees, allowing them to catch up with their more senior colleagues (Noy and Zhang, 2023[49]; Peng et al., 2023[50]). This can produce an equalising effect and drive productivity gains by helping workers conduct many tasks typically handled by subject matter experts (OECD, 2023[25]). However, some research also indicates that AI could contribute to further divides between higher-skilled and lower-skilled workers, and that while existing AI systems can increase worker productivity, they still cannot perform many tasks that humans can (The Economist, 2025[51]; Dell’Acqua et al., 2023[52]; Bengio et al., 2025[26]). More research is needed on this topic, including AI’s specific advantages and potential drawbacks for public servants.
As an example, the UK tax authority is using AI to draft job descriptions and to analyse and evaluate the qualifications of job applicants to speed up hiring (Box 5.20). As discussed below, using AI in this way can pose risks if not done in a trustworthy manner.
AI can also act as a catalyst for creativity and innovation among public servants, shaping how they design and implement internal processes, public policies and services. Generative AI, for instance, can support the exploration of policy alternatives, scenario simulations, legislative drafting and service prototyping, fostering a more imaginative and experimental public administration. For example, the UK’s Government Communication Service is developing an AI-powered conversational tool to generate draft texts, plans and strategic ideas, integrating communication guidelines and audience insights to ensure high-quality, compliant outputs (Box 5.39). Designed as a collaborative assistant, it boosts creativity, reduces routine workload, and is being gradually rolled out after successful piloting and iterative AI-driven refinement.
Tailoring services to address personalised citizen needs
AI can help governments to better understand people’s needs and behaviours, and facilitate the delivery of targeted and personalised information and services at an individual level (Huang and Rust, 2021[53]; Flavián and Casaló, 2021[54]; OECD, 2020[3]). This can include developing individualised citizen profiles, generating and delivering tailored information, and shaping service offerings based on unique needs (UN, 2022[55]). By improving responsiveness, such efforts can make services more efficient, effective and citizen-centred, resulting in higher satisfaction, better outcomes and a more agile and proactive approach to meeting public needs.
This enables a better response to the needs of user sub-groups, including vulnerable and disadvantaged groups, who may have specific and context-dependent needs (Giest, 2017[56]). The use of AI tools for personalisation has broadened into several public service sectors, frequently associated with life-event-based service delivery, such as services offered proactively for a child’s birth, new educational pursuits or marriage (Kopponen et al., 2024[57]).
Governments can also tap into AI’s capacity to analyse vast behavioural datasets for a deeper understanding of individual heterogeneity, incorporating cognitive and contextual factors — such as timing, location and personal preferences — into service design. This enables more adaptive and equitable service interventions that align with citizens' unique circumstances while safeguarding autonomy and informed decision-making (Mills, Costa and Sunstein, 2023[58]).
In addition to enhancing services themselves, AI can help optimise communications about service availability and thus increase uptake. Using existing administrative data, AI can simplify processes by pre-filling (or eliminating) forms with known information and tailoring questions to the individual, reducing the time and effort required to complete bureaucratic tasks. This is already being done in a limited way with administrative data and (human) programmed algorithms in the area of social programmes (OECD, 2024[59]). This targeted approach can help ensure that citizens receive the information they need efficiently, and that interactions with public services are streamlined and user-friendly.
Examples of this benefit in action include a wide variety of chatbots and virtual assistants that can respond to unique queries from citizens with tailored information. For example, Singapore’s tax authority offers a public-facing chatbot that can provide information and services tailored to individual needs (Box 5.4). As another example, social protection systems are using AI for proactive outreach to promote service uptake among those who qualify (Box 5.49).
It is important to note, however, that such services often require a great deal of data collection and analysis to determine individual characteristics and needs. Governments should undertake such data collection and AI use in a trustworthy manner. Otherwise, as discussed below, these efforts run the risk of infringing upon the free exercise of human rights, such as through privacy infringements.
Tailoring approaches to strengthen the civil service
AI can also strengthen the public service through more effective and inclusive hiring processes and personalised programmes for continuous development. With regard to human resource management (HRM), for instance, AI can help governments optimise hiring decisions by identifying the best candidates for a job, and improve inclusiveness by controlling for human biases.
AI can also empower public servants; it can contribute to learning development, enhance knowledge creation and optimise learning platforms for upskilling public servants. This includes crafting skills development strategies, designing and running tailored training courses and implementing tools to improve information access. By doing so, AI can help ensure that public servants are equipped with the latest knowledge and skills necessary to meet the evolving demands of their roles, fostering a more responsive public administration.
Some relevant AI applications include:
Developing learning material for civil servants: AI can create learning content (such as modules and course materials) from source documents and integrate diverse information into effective resources. It can also design, structure and deploy online learning courses. AI-driven tools can continually update and refine these resources as new information becomes available, enhancing the ongoing education and skill development of the government workforce.
Personalising material and learning routes for civil servants: AI can tailor educational content and learning pathways to meet the specific needs, preferences and progress of each civil servant, ensuring a more effective, engaging and adaptive learning experience. By equipping civil servants with the right skills more efficiently, this personalisation also enhances government responsiveness.
Identifying and cataloguing learning resources: AI tools can identify, describe and catalogue multiple learning resources, making them easily searchable and shareable, thereby simplifying resource discovery and identifying relevant content for learners. For example, integrating AI into digital platforms can enhance organisation, cataloguing and search functions.
As an example, the Australian Public Service Commission (APSC) has run a six-week pilot project to see if AI can design, structure and deploy an online learning course on digital skills for leadership (see Box 5.22). For further relevant discussion on this topic, see Chapter 5, section on “Civil service reform”.
Better decision-making, sense-making and forecasting
AI experts have identified better decision-making, sense-making and forecasting as the most important AI benefits overall, and they recommend stronger actions and further investments by governments in achieving these areas (OECD, 2024[14]). As discussed below, governments should pursue these benefits while being cautious to avoid over-reliance on AI, as flaws in systems can be difficult to identify, and overly deferring human judgement to machines could allow the systemic propagation of errors and cause real-world harm.
Enhancing decision-making and sense-making of the present
By providing actionable, data-driven insights, AI can help governments improve the effectiveness and efficiency with which they target actions, allocate resources and identify policy problems and solutions. Governments can therefore respond more effectively to emerging issues, ensure informed policy development, increase their overall responsiveness and accountability and, ultimately, promote societal well-being.
Some specific ways in which AI can be beneficial throughout the policy cycle (Figure 1.3) include:
Agenda setting and policy formulation: AI can play a pivotal role in bringing issues to the attention of policymakers and in the agenda-setting process by framing social problems in ways that make them more responsive to actual social needs. For example, AI enables governments to monitor and make sense of emerging topics in real time from vast and representative datasets, enhancing the accuracy and speed of agenda-setting (Valle-Cruz et al., 2020[60]; Kolkman, 2020[61]). By detecting potential challenges more accurately and quickly, AI facilitates faster policy responses before issues escalate (OECD/UNESCO, 2024[12]; Höchtl, Parycek and Schöllhammer, 2015[62]). In policy formulation, AI can influence the decision-making process by bringing important data and information about issues to the forefront (Valle-Cruz et al., 2020[60]). AI-enabled analysis can provide insights that estimate not only the likely impacts of policies, but also identify the target populations and make economic and social diagnoses, helping policymakers make informed decisions (Wirjo et al., 2022[63]; Ubaldi et al., 2019[9]). AI can also assist in devising policy alternatives, providing more in-depth ex-ante policy evaluation (Desouza and Jacob, 2014[64]).
Policy implementation. As policies move to the implementation phase, AI-driven automation, rapid data processing and real-time analysis enhance the quality, speed and efficiency of policy implementation. AI analytics notably strengthen and expedite the acquisition of data and information, supporting continuous improvements. Real-time data analytics can facilitate large-scale enhancements, ultimately improving the delivery of services during policy implementation (Valle-Cruz et al., 2020[60]; OECD/UNESCO, 2024[12]).
Oversight and evaluation. AI can monitor policy interventions in real time, providing better insights into the policy process, facilitating timely and accurate data assessments of policy interventions and enabling quick policy adjustments when needed (OECD, 2019[65]; OECD/UNESCO, 2024[12]).
Accessible data and increased computing power are providing AI with a competitive advantage over humans when it comes to decision-making in some cases (Green, 2022[66]). For instance, LLMs can support individual reasoning, and evidence shows real-world benefits from AI-assisted decision-making (Brynjolfsson, Danielle and Raymond, 2023[67]). AI systems can enhance decision-making by mitigating reasoning mistakes and cognitive errors, helping humans filter out “noise” — the unwanted variability in human decision-making — and irrelevant influences that can lead to inconsistent and inaccurate decisions (Du, 2023[68]).7 The potential for AI systems to make data-driven decisions is leading to their adoption across a range of sectors, including the public sector. AI can identify and address elements that distort human judgement across various applications in government (Mills, Costa and Sunstein, 2023[58]).
Additionally, noise may obscure key insights into human behaviour, which AI can uncover and quantify, contributing to more precise policymaking (Aonghusa and Michie, 2020[69]). Algorithms can significantly reduce noise by ensuring consistency in outcomes regardless of contextual factors, such as mood or time of day. By uncovering previously undetected or obscured behavioural patterns, AI can allow policymakers to better understand systemic trends and decision-making inconsistencies (Ludwig and Mullainathan, 2022[70]). While eliminating noise does not address all mistakes or errors, it enhances reliability, reducing arbitrary disparities in decisions across government functions like justice administration and public service delivery (Sunstein, 2023[71]).
As AI becomes more embedded in public administration, understanding the psychological and cognitive processes that shape human interactions with these technologies is increasingly important. Behavioural public administration can provide further valuable insights into the challenges that may arise, offering strategies to mitigate them and improve governance and decision-making (Alon-Barkat and Busuioc, 2024[72]).8 An example of this is governments’ use of Polis, an open-source civic engagement tool for understanding citizen views as well as areas of consensus and disagreement on public policy matters (Box 5.36). In the field of public financial management, Korea developed dBrain+, an information system that leverages AI to analyse real-time economic, fiscal and financial data to optimise risk assessment and decision-making in public finance (Box 5.8).
Better forecasting of the future
AI systems can process large volumes of data from multiple sources, including unstructured data, and identify complex patterns and weak signals — early signs of potential emerging changes, threats and opportunities — that are not easily detectable with existing methods. This can enhance the accuracy and timeliness of predictions and can be highly beneficial to other strategic foresight activities (Fitkov-Norris and Kocheva, 2025[73]). AI predictive analytics and forecasting in government involves using algorithms to anticipate future trends and risks. It can be widely applied across various domains, such as forecasting macroeconomic and fiscal outcomes (e.g. “nowcasting” GDP, as discussed in Chapter 5, section on “AI in public financial management”).
By providing accurate and timely forward-looking insights, AI enhances decision-making, resource allocation and the overall effectiveness of government operations. Some relevant AI applications include:
Predicting future service needs: AI has important potential in generating predictive analytics and forecasting, allowing public services to anticipate needs and be proactive. AI can help forecast future service needs, optimise resource allocation and enhance responsiveness across various policy domains by enabling the analysis of historical data and trends.
Regulatory forecasting: regulators can use AI to uncover emerging trends and shifts in various industries to proactively plan regulatory responses. By continuously monitoring and analysing data from multiple sources, such as market reports, social media and news articles, AI can identify new developments and technological advancements that may impact regulatory landscapes.
Disaster risk management: AI can also help forecast natural disasters by analysing historical data and current trends. For example, AI systems can analyse satellite imagery and other data to predict the likelihood of natural disasters like wildfires and earthquakes, allowing for proactive measures to minimise damage and enhance public safety (Sun, Bocchini and Davison, 2020[74]; Gupta and Roy, 2024[75]). AI systems can offer early warnings and useful information to help governments respond promptly to such events and mitigate their effects.
Anticipating corruption and fraud risks: predictive AI systems are helping integrity actors to prioritise cases for further human examination. Although the transfer of these approaches from research to governments is still limited, it is growing steadily. For instance, AI systems can be used to prioritise risky cases and streamline auditing processes. They can also support anti-corruption policy targeting by providing early warning systems that predict public corruption based on data such as economic and political factors. Predictive techniques are also key to several AI-enabled government accountability and oversight activities, such as risk-based fraud detection, as further discussed below.
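The demand-forecasting idea behind the first of these applications can be sketched with a minimal least-squares trend fit over historical counts; real government forecasting uses far richer models, and the variable names and figures below are purely illustrative.

```python
def linear_forecast(history, steps_ahead=1):
    """Fit y = a + b*t by ordinary least squares and extrapolate."""
    n = len(history)
    mean_t = sum(range(n)) / n
    mean_y = sum(history) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(history))
    var = sum((t - mean_t) ** 2 for t in range(n))
    slope = cov / var
    intercept = mean_y - slope * mean_t
    return intercept + slope * (n - 1 + steps_ahead)

# Illustrative monthly application counts (invented figures).
monthly_applications = [100, 110, 120, 130, 140, 150]
print(round(linear_forecast(monthly_applications)))  # a perfectly linear series extrapolates to 160
```

The same projection logic, applied to richer data and models, underpins the proactive resource allocation the bullet points describe.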
As an example of predicting future public service needs, Portugal’s PrevOcupAI system aims to predict work-related illnesses and related risk factors in the public administration to minimise disruptions (see discussion in Chapter 5, section on “AI in public service design and delivery”).
Improving information management and accessibility
Robust, quality data is an underlying pre-requisite for enjoying AI’s benefits. AI can help to maximise the quality and utility of data, as well as the ability of humans and machines to process and analyse it (Jarrahi et al., 2023[76]). For instance, AI systems enable new forms of data collection, including automatically detecting and identifying items in images, audio recordings or video. The capabilities and prevalence of AI-enabled sensing devices have progressed rapidly, allowing automatic speech transcription, motion detection, live image recognition and a wide range of tasks that previously required human labour (Zhang, Wang and Lee, 2023[77]; OECD, 2023[31]). AI can also improve how information is stored, disseminated and applied. This is particularly visible with the integration of AI into knowledge management systems (Sanzogni, Guzman and Busch, 2017[78]).
Improving how information is managed internally within governments can also help governments provide open information and data to the public. The use of AI can help minimise errors in data management, such as by reducing manual effort, and when used in combination with Privacy-Enhancing Technologies (PETs), can help enhance the privacy and protection of sensitive personal data and information (OECD, 2025[79]). This, in turn, enables the wider publication and access to open data. For instance, in justice administration, an AI-powered anonymisation engine can automatically identify and protect personal data within court decisions (Box 5.64), preparing them for public release as part of an open data initiative. The OECD (2023[80]) is also exploring the use of PETs, which are digital solutions that allow information to be collected, processed, analysed and shared while protecting data confidentiality and privacy.9
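As a toy illustration of the anonymisation idea (not the engine referenced in Box 5.64), a rule-based redaction pass might look like the sketch below. Production anonymisation engines rely on trained named-entity recognition models; the patterns and labels here are invented for illustration.

```python
import re

# Illustrative patterns only: production anonymisation engines rely on
# trained named-entity recognition models, not hand-written regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ID_NUMBER": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymise(text):
    """Replace matched personal data with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

ruling = "Contact jane.doe@example.org, taxpayer ID 123-45-6789."
print(anonymise(ruling))  # Contact [EMAIL], taxpayer ID [ID_NUMBER].
```

Typed placeholders (rather than blank deletions) preserve the document's readability for open data release while removing the sensitive values themselves.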
Internal government virtual assistants for civil servants provide a good example, with France’s Albert and the UK’s Caddy providing a wealth of historical and cross-government information at public servants’ fingertips to inform decisions and help respond to citizen inquiries (Box 5.46).
Enhanced accountability and anomaly detection
One of the most longstanding uses of AI in government is to detect anomalies, such as fraud, or forecast future integrity risks, thus enhancing accountability and integrity in public programmes. Extra care should be taken with this use of AI to avoid harmful outcomes.
Detecting improper transactions and assessing integrity risks
Such activities are common in a variety of government functions, including public procurement, tax administration and public financial management. Fraud and improper payments in government programmes can be significant sources of financial leakage. For instance, in the United States alone, the federal government loses an estimated USD 233-521 billion annually to fraud (US GAO, 2024[81]).
ML algorithms are particularly effective at pattern recognition: they can analyse large datasets and detect outliers, hidden relationships (e.g. indicating collusion) and other anomalies that require further human investigation. Without AI’s capacity to analyse data at scale and identify hidden patterns, such irregularities may otherwise go unnoticed. This use of AI can enhance the ability of government organisations to maintain integrity and accountability. For example, France’s tax authority uses AI to analyse aerial photography and identify undeclared properties (Box 5.1).
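The outlier-detection step described above can be sketched with a simple z-score screen over payment amounts. Deployed systems use trained ML models over many features rather than a single statistic, and the invoice figures below are invented.

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Return indices of amounts whose z-score exceeds the threshold."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, x in enumerate(amounts)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

# Invented invoice amounts; the last one is wildly out of line.
invoices = [120, 135, 110, 125, 130, 115, 128, 9_800]
print(flag_outliers(invoices))  # [7]
```

Note that the flagged index is a candidate for human investigation, not an automatic finding of fraud, mirroring the human-in-the-loop approach the paragraph describes.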
Governments can use AI to better identify, evaluate, predict and respond to potential integrity risks, enabling better management, mitigation and timely interventions. For example, in regulatory compliance, inspectors are increasingly using AI to assess the risk posed by private operators. This improves the targeting of inspections, protection of public interests and efficient use of resources (OECD, 2018[82]; 2021[83]). AI assists inspectors by detecting patterns indicative of potential non-compliance, allowing for more accurate risk assessment. This use of AI not only streamlines inspections but also enhances the overall effectiveness of regulatory frameworks by ensuring that resources are directed where they are most needed.
Enabling non-governmental actors to understand and engage with government and promote accountability
Governments can use AI to be more transparent and to enable new forms and channels of interaction between citizens, civil society organisations (CSOs) and public institutions. In fact, AI experts suggest that empowering citizens, CSOs and social partners (e.g. trade unions), underpinned by government transparency and use of AI, is one of the 10 most important benefits that AI can yield (OECD, 2024[14]). Governments can pursue this in three main ways:
offering AI-enabled tools leveraging open government data (OGD) directly to citizens to help them navigate and make sense of government processes and actions
enabling third-party oversight and scrutiny of government operations by CSOs and other non-governmental actors
providing engagement opportunities and channels where the public can provide feedback and raise potential issues about government performance and decisions.
If executed well, this use of AI has the potential to promote accountability and public integrity, strengthen policymaking and increase citizen trust in government. This benefit — including real-world examples and discussion on the steps governments need to take to achieve it — are discussed in-depth in Chapter 5 in the section on “AI in civic participation and open government”.
Unlocking opportunities for external stakeholders through AI as a good for all
A final benefit to the use of AI in government is unlocking new opportunities for external stakeholders, such as businesses and citizens, by providing them with access to government-developed AI systems. This benefit of AI may not be as directly observable in direct government activities as the other benefits. But it has the potential to improve public governance through increased trust in government, a more informed and skilled citizenry, or even economic growth. Countries have put in place OGD programmes not only to promote transparency and accountability, but also to promote entrepreneurship, economic growth and the creation of value that may not have necessarily been understood or foreseen at their outset (OECD, 2018[84]). For instance, Landsat satellite image data freely released by the US Geological Survey since 2008 now generates over USD 25 billion annually in economic value (USGS, 2024[85]). When the OGD movement began in the late 2000s and early 2010s (Chignard, 2013[86]), it was not fully realised that the data made available would serve as vast resources for training AI systems a decade later.
AI’s nature is distinctly different from that of data: AI is not a natural byproduct of an array of existing functions and activities, nor can it serve as a raw input for other processes. (Data, by contrast, has been compared to fuel, electricity or drinking water for AI.) Yet governments’ use of AI has the potential to generate public good by empowering stakeholders to enhance their capabilities and access to information to derive new value.
Unlike other applications — where AI facilitates interactions between government and citizens, or improves information provision and accessibility — certain specific uses of AI in public governance can augment the capabilities of these actors, enabling them to achieve their missions and objectives more effectively. For example, the government of India has made AI systems available to farmers to help them ensure crop health and mitigate pest-related challenges (Jeevanandam, 2024[87]). Another example is Germany’s FAIR Forward – Artificial Intelligence for All development initiative, which supports open creation and responsible usage of AI systems on areas such as agriculture, climate protection and citizen participation (OECD, 2023[88]). Such approaches may be particularly useful in markets not yet served by, or with little appeal for, private sector solutions. Governments may have the resources to invest in under-explored fields and can take the first steps to build out new markets, taking risks where others may not be ready or willing to.
The potential for positive external opportunities would be amplified further if governments were to provide access to enablers for AI, such as digital infrastructure in the form of computing power (Ho, 2023[89]) (see Chapter 4, section on “Building out digital infrastructure”). In some cases, these enablers may already exist for government use, only requiring some adjustments and scaling to make them available to a broader audience. In other cases, enablers targeting external actors could be developed and supplied. This can democratise AI’s potential value (OECD, 2024[14]).
Non-governmental entities can also participate in the creation and deployment of AI tools, playing leading roles in fostering technology-enabled participation (OECD, 2025[90]).
To seize the benefits of AI, its risks need to be managed
The global adoption of AI in all sectors raises questions about trust, fairness, privacy, safety and accountability, among others. Considering these issues and managing AI’s risks can have an impact on its adoption and the realisation of its benefits (Tse and Karimov, 2022[91]). AI poses hundreds of risks,10 and experts identify some of the most important for society as follows (OECD, 2024[15]; [14]; [92]; 2022[93]):
Possible adverse outcomes for some groups or individuals if AI systems are underpinned by inadequate or skewed data.
AI systems lacking sufficient transparency and explainability erode accountability.
Certain AI uses raise data protection, privacy and surveillance concerns.
AI can facilitate increasingly sophisticated cyber threats.
Minor to serious AI incidents and disasters could occur in critical systems.
AI could contribute to labour market disruptions.
AI’s computational requirements entail very significant energy use.
These are thorny issues that are being considered by governments, companies, CSOs and other relevant stakeholders. Efforts such as the adoption of the European Union (EU) AI Act (2024[94]) and a variety of national-level governance initiatives help to illustrate how governments are taking an active role in governing AI for society.11
Risks specific to AI use in government
Perhaps all AI risks could implicate governments in some way, such as necessitating governance processes or mitigation measures. Yet a narrower subset of risks is most relevant for policymakers and public servants as they pursue the strategic and responsible use of AI in government. As mentioned above, governments have an outsize influence on people’s lives, thus their use of AI can have a greater impact on the public in both positive and negative ways. Accountability expectations are, therefore, higher for government use of AI. This can be seen explicitly in the EU AI Act, which classifies many public sector AI use cases as “high-risk”, and some as “unacceptable risk” and therefore banned in the EU (Box 1.2). The United States takes a different approach, considering some use cases as “high-impact”, and thereby requiring certain risk management practices (Box 1.3). Korea’s Basic Act on the Development of AI and the Establishment of Foundation for Trustworthiness (“AI Basic Act”) (2024[95]), which will take effect in January 2026 and applies to organisations developing or using AI in the Korean market, similarly designates some uses as “high-impact”, thereby requiring enhanced measures for ensuring AI safety and reliability. The AI Basic Act also includes a separate designation for generative AI that includes specific transparency and disclosure requirements.
Box 1.2. The EU AI Act’s risk levels as related to AI in government
The European Union (EU) AI Act is a regulation on AI that entered into force in August 2024. The regulation establishes obligations for AI based on its potential risks and level of impact. The Act identifies four different levels of risk that are relevant for governments’ use of AI.
Unacceptable risk. AI uses under this category are prohibited. Examples include predictive policing, real-time remote biometric identification (including facial recognition) in public spaces for law enforcement, social scoring and assessing the risk of an individual committing criminal offences. Law enforcement and justice administration are among the functions of government most affected by this category. However, some exceptions apply, such as use cases concerning national security and those remaining subject to judicial oversight.
High-risk. AI uses under this category are allowed but regulated due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law. Due to its potential impact on these aspects, many government uses of AI might fall under this category. Examples include systems used to influence the outcome of elections and voter behaviour, automated processing of personal data to assess various aspects of a person’s life, assessing eligibility for benefits and services, and safety components used in the management and operation of critical infrastructure. Obligations include establishing a risk management system, conducting data governance, setting up technical documentation to demonstrate compliance and mandatory fundamental rights impact assessment.
Limited risk. These uses might include systems intended to communicate with individuals, such as chatbots, as well as systems that generate content such as text and images. Transparency obligations require developers and deployers to ensure that end users are aware that they are interacting with AI.
Minimal risk. These systems are unregulated, but a code of conduct is suggested. Examples include video games and spam filters.
Source: (EU, 2024[94]).
Box 1.3. The concept of “high-impact” AI in United States (US) policy
In the United States, the policy M-25-21 Accelerating Federal Use of AI through Innovation, Governance and Public Trust establishes a binary approach for AI use cases in government: either the AI use case is considered “high-impact”, or it is not.
High-impact AI is AI with an output that serves as a principal basis for decisions or actions with legal, material, binding or significant effect on:
an individual or entity's civil rights, civil liberties or privacy; access to education, housing, insurance, credit, employment, critical government resources or services; or other programmes
human health and safety
critical infrastructure or public safety
strategic assets or resources, including high-value property and information marked as sensitive or classified by the federal government.
Federal agencies are responsible for conducting reviews on their AI use cases and determining the applicability of the high-impact definition. The policy provides examples of 15 categories of AI use cases that are presumed to be high-impact.
Agencies must implement minimum risk management practices for high-impact AI use cases. The minimum risk management practices include the following: conducting pre-deployment testing and an AI impact assessment; ongoing monitoring for performance and potential adverse impacts; ensuring adequate human training, assessment, human oversight, intervention and accountability; offering consistent remedies and appeals; and consulting and incorporating feedback from end users and the public. Agencies must have a plan to discontinue the use of any high-impact AI system that is not performing at an appropriate level in compliance with the policy, until actions are taken to achieve compliance.
Limited pilot programmes should follow minimum risk management practices where practicable. However, pilot programmes are exempt from the minimum risk management practices if the agency’s Chief AI Officer (CAIO) certifies this and other criteria detailed in the source document are met. Agency CAIOs may also waive one or more minimum risk management practices under certain circumstances for a specific use case, though they must certify the ongoing validity of each determination and waiver annually, track them centrally and publicly release a summary of each.
As they seek to develop and use AI, governments face risks that include potential dangers and threats that could cause serious problems for individuals and society (Valle-Cruz, Garcia-Contreras and Gil-Garcia, 2023[96]), potentially undermining public trust, the legitimacy of government’s AI use and even democratic values. To address these concerns, it is important to identify and manage these risks, consider how AI systems may impact citizens or marginalised populations differently, help ensure the equitable distribution of AI benefits and mitigate potential harm. The continuous consideration of potential risks is important because known risks can evolve and new risks can emerge, including ones previously considered to be outside the realm of possibility.
This report identifies five general types of risks for the use of AI in government, as presented below. Beyond grappling with these risks, governments also face a number of implementation challenges when seeking to develop and use AI. These implementation challenges are discussed in Chapter 3.
Ethical risks: These include AI uses that undermine the free exercise of human rights and freedoms, including privacy, potentially infringing on human-centred values either deliberately or inadvertently. AI algorithms can introduce ethical risks from the digital realm to the physical world through biased algorithms and unethical behaviours like invasive surveillance. Key concerns include threats to trust, fairness, freedom, dignity, individual autonomy and labour rights.
Operational risks: These include technical and operational failures that might affect data privacy, the quality of AI outcomes and internal government operations due to cyber threats, unintended consequences, hallucinations, systematic errors and overreliance on AI systems.
Exclusion risks: These risks relate to gaps that arise when citizens without access to technology or digital literacy can be left behind and unable to benefit from AI advancements in public services.
Public resistance risks: These include public resistance to government use of AI. This can be driven by distrust in government AI systems or processes, or by the spread of false or misleading information about how AI is implemented in public administrations and its potential impacts.
Risks of inaction: Although often overlooked, this risk includes government delays in using AI to yield positive benefits. This can result in significant financial and non-financial costs — which could have otherwise been avoided with AI’s successful adoption — and a widening gap between public sector and private sector capabilities.
Ethical risks
Inadequate or skewed data in AI systems
AI systems have the potential to perpetuate or generate adverse or harmful outcomes, stemming from incomplete or inadequate data, as well as from the ways AI usage intersects with institutional and social practices that are human-centred or systemic in nature.
It is important to recognise that algorithms do not operate autonomously; they are shaped by human choices at every stage, from model selection and training data to fine-tuning and parameter adjustments. Since AI systems usually learn from human-generated data, they inevitably reflect existing social outlooks and behaviours. Moreover, in government, algorithms rarely make decisions independently — they typically serve as tools that inform and influence human decision-making rather than replace it. Even where policies aim for standardisation, successful implementation still heavily depends on local context and the "messy engagement of multiple players with diverse knowledge" (Davies, Nutley and Walter, 2008[97]). This interplay between AI and human judgment means that errors can persist not only in algorithmic outputs but also in how AI-generated recommendations are interpreted and applied. Understanding how decision-makers process and respond to AI-generated information and ensuring they have the skills to use AI in a trustworthy manner are therefore critical. On one side, cognitive subjectivity remains relevant in shaping human-AI interactions despite AI’s perceived neutrality, while on the other, machines lack empathy. Therefore, it is important that public servants can use professional judgment and practical wisdom to ensure fairness by exercising judicious discretion.
AI systems are also highly sensitive to the quality of training data, making them susceptible to overfitting patterns,12 learning spurious correlations and amplifying errors or skew embedded in human-generated datasets. While AI can help eliminate noise, as discussed as a benefit above, its pattern-seeking nature can also exacerbate it, particularly when systemic inconsistencies exist in the training data. Without rigorous oversight and mitigation, AI risks reinforcing distortions rather than delivering objective, reliable decisions, highlighting the critical need for careful data curation and validation, system testing, anticipatory and retrospective impact monitoring and assessment, and algorithmic transparency in AI-driven decision-making (Shane, 2019[98]).
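To make this concrete, the following is a minimal, purely illustrative sketch (not drawn from any government system; the groups, decisions and counts are hypothetical) of how a pattern-seeking model fitted to skewed historical records will faithfully reproduce the skew rather than correct it:

```python
from collections import Counter

# Hypothetical historical approval decisions: applicants from group B
# were approved far less often in the past, for reasons unrelated to merit.
training_data = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10
    + [("B", "approve")] * 30 + [("B", "deny")] * 70
)

def train_majority_model(data):
    """'Learn' the most frequent outcome per group - a stand-in for any
    pattern-seeking model fitted to human-generated decision records."""
    outcomes = {}
    for group, decision in data:
        outcomes.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train_majority_model(training_data)
# The 'model' reproduces the historical distortion instead of correcting it:
# every future applicant from group B is denied.
print(model)  # {'A': 'approve', 'B': 'deny'}
```

However simplistic, the sketch illustrates why data curation and impact assessment matter: nothing in the fitting step distinguishes a genuine signal from an inherited distortion.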
Because of its potential to have systemic impacts, insufficient or skewed data and algorithms can also exacerbate other types of risks, such as those discussed below.
Misuse or questionable use of AI, resulting in surveillance and privacy concerns
The misuse or questionable application of AI in government could result in harm to the free exercise of individual freedoms and rights. A prominent concern is the use of AI in delivering public security and safety services, where it enables efficient identification and tracking through biometric data and real-time monitoring. While these tools can be useful for law enforcement and crime prevention, they also raise concerns about data privacy, as well as surveillance and social control by public administrations. For instance, during the COVID-19 pandemic, AI was employed to track people's movements to ensure compliance with self-isolation mandates (Saheb, 2022[99]). Although this measure was intended to control the spread of the virus, it raised privacy concerns among the public (OECD, 2020[100]). Indeed, AI-driven surveillance is increasingly pervasive, with the Carnegie Endowment for International Peace (2022[101]) finding that 97 of 179 (54%) countries analysed are using AI technologies for public surveillance.13 The index identified non-democratic states as a major driver of AI surveillance, including through selling products to other countries. Yet others, including liberal democracies, are also major users of AI surveillance (Saheb, 2022[99]). As seen in Box 1.2, the EU AI Act regulates when the use of AI for surveillance is admissible, and it identifies use cases that are an “unacceptable risk” and banned throughout the EU.
Similarly, personalised service delivery often requires algorithmic processing of data scattered across multiple public and sometimes private data sources (Nikiforova et al., 2023[102]). This raises privacy concerns and highlights the need for governments to pursue both efficient service delivery and the robust protection of individual privacy rights. For instance, to provide personalised social services, governments might aggregate data from healthcare records, educational achievements, employment history and even social media activity. While the goal is to offer tailored support and timely interventions, processing such vast amounts of personal information needs to be done carefully and deliberately, with controls in place to mitigate concerns of surveillance. Such guidance may be found in (OECD, 2017[103]) and (OECD, 1980[104]). Robust policies and infrastructure are needed to consider trade-offs between responsiveness, transparency and the protection of sensitive information (OECD, 2024[105]; [59]).
Another misuse case is social scoring in service delivery or policymaking, a practice where individuals are classified based on behaviour or personal traits. AI algorithms analyse data from sources like social media, financial transactions and public records to assign scores. These scores can impact access to services, loans and employment opportunities, leading to unfair treatment. Government use of such systems is banned in the EU as an unacceptable use (EU, 2024[94]).
In a recent survey of hundreds of experts across fields, 79% said that AI will have a negative impact on people’s privacy by 2040, a concern shared by the general public (Rainie and Anderson, 2024[106]; Fazlioglu, 2024[107]). Governments will need to ensure that their use of AI is trustworthy to allay these concerns.
The potential misuse of AI tools for citizen surveillance by authorities can lead to overreach and abuse of power. Continuous monitoring and data collection can create a climate of fear and mistrust, particularly among communities who may already feel disproportionately targeted by law enforcement (UN, 2024[108]). Governments can also use AI to strengthen political power, potentially facilitating wide-scale subjugation and authoritarianism (OECD, 2024[14]), especially by non-democratic governments or those that do not prioritise the protection of human rights. Some experts argue that AI could be — and in some instances already is being — used to track and monitor citizens and residents at scale, using algorithms and behavioural analysis, identify and suppress opposition, and perpetuate totalitarian regimes (OECD, 2022[109]; Tegmark, 2017[110]; Clarke and Whittlestone, 2022[111]; Byler, 2021[112]).
Finally, algorithmic manipulation, where AI systems and their results are altered to produce specific outcomes, is another potential misuse and unethical behaviour in the governmental use of AI (Valle-Cruz, Garcia-Contreras and Gil-Garcia, 2023[96]). This manipulation can stem from individual public servants, decision-makers or AI system developers intentionally altering system results to benefit or harm certain groups or individuals or push a specific agenda. AI’s inherent complexity can additionally make it challenging to trace and understand algorithms' inner workings.
Lack of transparency and explainability
Systems based on deep learning are “black boxes”, meaning that it is difficult to describe how they produce a given output. Such outputs are indirectly generated from deep learning training as engineers continually tweak parameters until the model scores highly on training objectives (Clarke and Whittlestone, 2022[111]). Even scientists working on advanced deep learning models do not understand the inner workings of their systems, and they find it hard to trace back these outputs and test the reliability of these systems through traditional methods (OECD, 2022[109]).
This makes it hard to detect and mitigate harmful outcomes and produces challenges in determining accountability when issues arise. As AI systems become increasingly integrated into functions of government, black box systems could make it difficult to explain the rationale for AI-assisted decisions to citizens. It can also exacerbate other AI risks. For instance, it can be more difficult to identify algorithmic bias and its root causes in opaque systems. Public servants could also increasingly have a false sense of trust in seemingly efficient yet flawed AI systems because such flaws may be unobservable (contributing to automation bias, discussed below) (OECD, 2024[14]; Russell, 2019[113]). This can erode government accountability and disempower the public by limiting their ability to make informed decisions or potentially making them subject to opaque, flawed AI-driven decisions (Lima et al., 2022[114]).
Operational risks
“Automation bias” – overreliance on AI
Many people perceive AI systems and their decisions to be neutral and impartial, leading users to accept results without scrutiny. Studies on AI-assisted decision-making have identified a tendency to overweight algorithmic recommendations, often assuming their predictions to be more reliable than human judgment — even when the AI system itself has limitations (Alon-Barkat and Busuioc, 2024[72]). This “automation bias” (the propensity for people to trust AI outputs because they appear rational and neutral) can lead to the application of AI decision-making to a growing number of societal challenges (Horowitz, 2023[115]; Alon-Barkat and Busuioc, 2022[116]) — perhaps to avoid difficult conversations and decisions about human approaches to these issues. Some experts assert that this habit is creating “blind faith” in technology, a problematic phenomenon that can reinforce existing systemic issues against certain groups or individuals and contribute to neglect of human suffering and erosion of empathy (Goldman, 2023[117]; Olson, 2023[118]).
So-called “automation bias” in government occurs when public organisations or civil servants rely too heavily on AI systems for decision-making or task execution. For example, if healthcare professionals rely too heavily on AI-automated suggestions without cross-checking, they might miss critical information or make incorrect diagnoses. This excessive dependence can result in users failing to recognise mistakes, accepting incorrect AI outputs, and diminishing human oversight and judgment (Passi and Vorvoreanu, 2022[119]; Klingbeil, Grützner and Schrec, 2024[120]). AI could be systematically adopted without fully assessing accuracy and potential consequences, leading to reliance on AI systems and the propagation of compounding errors throughout entire systems (Valle-Cruz, Garcia-Contreras and Gil-Garcia, 2023[96]).
One contributor to such issues is hallucination, which occurs when generative AI systems make up facts in a credible way, often when a correct answer is not found in the training data. This can be harmful in contexts such as government decision-making, where it may lead to misguided decisions or actions (OECD, 2024[15]; Beltran, Ruiz Mondragon and Han, 2024[121]). For example, an AI system designed to answer queries from the public could provide erroneous details about public services or list services that do not exist. If not carefully mitigated, these unintended consequences and incorrect outputs can scale rapidly, affecting large populations or critical internal government decisions.
AI systems can also be prone to simple errors and malfunctions, which can lead to systematic deviations and imprecision in algorithmic outputs. If these systems are not carefully implemented or fail to be technically reliable, they risk diminishing public trust, including that of civil servants. Citizens may experience frequent frustrations due to incorrect processing of their requests or delays in service delivery. For example, automated systems that handle form submissions, customer service inquiries and payment requests need to operate with great accuracy; otherwise, mistakes or technical glitches can lead to a perception of incompetence and unreliability in public services. Technical problems or errors in chatbot interactions, such as incorrect responses, inability to understand queries or system outages, can further erode trust, making citizens feel that AI systems are unreliable and ineffective.
Risk aversion also plays a role in “automation bias”. Civil servants may be afraid to take personal responsibility for decisions for fear of getting in trouble later. An example is when a human decides against the advice of an AI system and it turns out to be the wrong decision. In such cases, the civil servant will seem doubly responsible. As seen in Box 5.43, some government AI systems have even been designed to make public servants write justifications for review if they go against an AI system’s advice, adding to the burden of their work and reinforcing incentives to follow what the system recommends.
Research also highlights the concept of selective adherence, where decision-makers are more likely to follow algorithmic recommendations when they align with their pre-existing beliefs or societal stereotypes (Alon-Barkat and Busuioc, 2022[116]). This can lead to distorted public decision-making, as civil servants may unconsciously use AI outputs to justify pre-existing biases rather than critically evaluating them.
Some research suggests that overreliance on AI may contribute to a decline in human cognitive abilities, reducing exploration, creativity and independent thought, as individuals become accustomed to AI-generated solutions. Studies suggest that frequent reliance on AI-driven decision support systems can lead to cognitive offloading, reducing individuals’ engagement in critical thinking and independent problem-solving (Gerlich, 2025[122]). AI-driven decision-making also risks promoting behavioural homogenisation, as its outputs often reflect limited diversity of perspectives. This narrowing of perspectives could hinder adaptive thinking and reduce the capacity of societies and governments to navigate uncertainty and risk (Meng, 2024[123]). Although overreliance on AI may pose risks, AI tools can also help human operators interpret and question complex AI decisions, discouraging overreliance (OECD, 2024[15]).
Reduced job quality for public servants
While AI can enhance public servants’ job quality and well-being, as discussed above, some uses can have the opposite effect. The use of algorithmic management tools is increasing significantly, reaching an adoption rate of 90% in US firms and 79% in the EU (Milanez, Lemmens and Ruggiu, 2025[124]). While public sector-specific studies have not been conducted, tangible concerns have been raised about existing negative impacts of AI and algorithmic tools on job quality, including work intensification, increased stress, perceived reduction in fairness and workplace surveillance (OECD, 2023[25]). For instance, AI could make jobs less fulfilling and more stressful by incentivising new types of surveillance in the workplace, or new forms of hyper-efficient yet exhausting “digital Taylorism”, in which work is subject to increased surveillance and regulation, including through algorithmic management (UC Berkeley, 2021[125]; EC, 2025[126]).14 Further research has shown that such AI surveillance can harm mental health (APA, 2023[127]), and that AI task management can erode the autonomy and voice of workers, reducing human insights into how work is managed (Gmyrek, Berg and Bescond, 2023[128]). However, if used well, such tools have also been shown to improve worker safety and well-being (e.g. by alerting workers about dangers and hazards, or identifying burnout) (EC, 2025[126]). To date, algorithmic management and its impacts have mainly been observed and studied in the private sector (OECD, 2023[25]); research on its use in the public sector remains scarce, even though adoption is expanding rapidly (EC, 2025[126]).
Privacy and data governance tensions
Developing and deploying AI systems poses privacy and data governance challenges throughout the AI lifecycle (OECD, 2024[129]; forthcoming[130]). In the AI training stage, many developers depend on publicly accessible sources for building AI training datasets, which may purposefully or inadvertently include personal data or information subject to intellectual property rights. However, just because data is accessible on the Internet does not automatically mean that it is free to be collected and used to train AI models. Further, people may have shared their personal data consenting to another use or uses, which do not necessarily include training AI models (ICO, 2023[131]). The collection of personal data for training AI systems, like any data processing activity, is subject to the commonly recognised privacy principles set forth in the OECD Recommendation Concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data (OECD Privacy Guidelines) (1980[104]). These principles require that personal data be obtained through lawful and fair means, with the knowledge of the data subject, and that any further uses of the data are not incompatible with the original purposes.
Another important aspect to consider is the capacity of AI models to memorise personal data within their parameters during the training stage. As a result, LLMs behind text-based generative AI tools pose a particular risk of unauthorised access and use of third-party personal data without the knowledge of the individuals concerned (Brown et al., 2022[132]). Some research also shows that generative AI models are able to infer personal attributes of the data subject from text with high accuracy, yet at a low cost (Staab et al., 2023[133]). This raises privacy concerns because these inferences can reveal personal information or characteristics, especially when such traits were not intended to be shared.
During the deployment stage, AI systems can also be in tension with individuals’ rights to access, correct, and where necessary, delete their personal data (also known as the “Individual Participation Principle” in the OECD Privacy Guidelines). For example, fulfilling individuals’ rights to have their data deleted or corrected can be technically complex and resource-intensive, as it might require identifying specific data points related to an individual within unstructured datasets or in some cases re-training the AI model. Additionally, research conducted prior to the widespread use of generative AI models suggests that, in certain cases, it is possible to reconstruct or de-anonymise original training data by analysing the behaviour of a model that includes that data (Salem et al., 2018[134]). To address such tensions, promoting further international cooperation between data privacy and AI communities can contribute to harmonising data practices for AI development and use. For example, OECD’s Expert Group on AI, Data, and Privacy15 is exploring policy responses on data governance and privacy in the context of AI, involving experts from multiple sectors and disciplines around the world.
Cyber threats
AI systems incorporated into government processes can be vulnerable to cyber threats, which can lead to data breaches, privacy violations and loss of functionality. The extensive collection and analysis of personal data for AI applications can result in loss, alteration or unauthorised disclosure of this data, infringing on individual privacy rights (Beltran, Ruiz Mondragon and Han, 2024[121]). Unauthorised access and data breaches can compromise personal data and operational integrity, which could result in identity theft, financial fraud and other privacy violations, undermining public trust in government institutions and resulting in legal consequences. Additionally, malicious cyber actors could manipulate AI systems, altering their outputs and causing erroneous decisions or actions (Brundage et al., 2018[135]; Gopireddy, 2024[136]). Cyber risks may originate from external bad actors, or from insider threats within government (Eshelby, 2025[137]). Overall, cybersecurity can be seen as a horizontal function of government in itself, and indeed represents one of the areas of greatest AI adoption in government for enhancing security of government IT systems (Mariani, Kishnani and Alibage, 2025[138]). However, this function is highly specialised and falls outside the primary scope of this report, so it is not analysed in depth here. The OECD has an ongoing workstream on digital security that has conducted relevant work.16
Exclusion risks
Exacerbating digital divides
The risk of excluding people when using AI in government is closely linked to digital divides (UN, 2024[108]), the gap between those who can access and use information and communication technologies and those who cannot. This is particularly evident in public service delivery and policymaking activities, especially in the use of AI for predictive analytics, forecasting and service personalisation.
Digital divides risk hindering access to the benefits and public value that AI offers. This is especially the case among populations lacking the necessary infrastructure and digital literacy to engage with AI-driven public services (OECD, 2024[14]; ITU, 2023[139]). AI in government can offer advantages — like tailored information, faster response times and enhanced service delivery — but may be inaccessible to those without internet access or digital skills. Governments’ shift to digitalisation can intensify the barriers to digital government services among certain segments of the population. Research in Canada found that users in rural locations, women and girls and those in low-income households were negatively affected by the push towards digital-first services due to the COVID-19 pandemic (Singh and Chobotaru, 2022[140]). In Norway, the digitalisation and automation of the system for awarding child benefits made it possible for most recipients to receive the benefit automatically. At the same time, it resulted in the need for other recipients to apply manually, a burden that disproportionately affected low-income segments (Larsson, 2021[141]).
Data divides, a consequence of existing digital divides, are another form of AI-related exclusion, linked to unrepresentative data. Individuals without internet access are often absent from the data used to develop algorithms. Because most data are predominantly collected through the internet and online interactions, the widespread use of AI may exclude these individuals, as algorithms used to inform policymaking lack representative data. These gaps make it difficult for governments to respond adequately to the needs of all citizens, resulting in insufficient or inadequate data sets in AI-driven decision-making processes and service delivery. For instance, data divides limit the potential for AI benefits, such as personalised AI services, leaving them only useful and accurate for data-rich populations (UNESCO, 2019[142]; Perry and Turner Lee, 2019[143]; Dieterle, Dede and Walker, 2022[144]).
Another form of digital divide involves underrepresentation of languages (Röttger et al., 2024[145]; Peixoto, Canuto and Jordan, 2024[146]). Training datasets tend to overrepresent widely used languages, as seen in Figure 1.4. This imbalance can lead to AI systems that fail to serve non-dominant language groups effectively. Language preservation efforts, such as Estonia’s Donate a Speech, reduce the current limitations of speech technology, which favour the most widely recognised languages, and enhance service delivery (OECD, 2023[38]). Additional considerations and examples are discussed in Chapter 4, section on “Creating a strong data foundation”.
Figure 1.4. More than half (59%) of open-source AI training datasets are in English
Percentage breakdown of languages for open-source AI training datasets on Hugging Face
Note: This chart represents the language distribution of all datasets. Multilingual and translation datasets on Hugging Face contain more than one language and are thus double counted. More methodological information available at: https://oecd.ai/huggingface.
Source: OECD.AI (2024), visualisations powered by JSI using data from Hugging Face. Last updated on 5 June 2025, (accessed 16 June 2025).
Public service workforce displacement
The OECD (2023[25]) found that while AI is capable of automating non-routine tasks, its future impacts on labour are ambiguous; they depend on the balance between the displacement of human labour by AI, the increase in labour demand due to greater productivity brought about by AI and the creation of new jobs caused by AI adoption. AI can enhance government productivity by automating tasks, and it also has the potential to reduce the need for human labour, resulting in the need to re-skill public servants to take on more meaningful tasks (Peixoto, Canuto and Jordan, 2024[146]).
At the same time, AI deployment is driving a growing demand for AI-related skills across the economy, and the public sector is no exception. As AI-related roles increase, hiring and retention efforts for non-AI or traditional positions may decline, highlighting the need for a workforce skilled in AI and related technologies to meet evolving demands (Acemoglu et al., 2022[147]).
AI technologies impact a wide range of occupations and sectors, affecting workers of all skill levels and influencing labour markets (OECD, 2023[25]). While some government workers may adapt to AI and even see their work enhanced, others, such as older and low-skilled workers conducting tasks that are easy to automate, face significant risks. This suggests that AI’s benefits are not evenly shared among public servants (Milanez, 2023[148]). For instance, AI-powered chatbots are now commonly used for service delivery and citizen-centred communication to answer basic questions and provide information, which reduces the demand for government human customer service representatives (Acemoglu, 2024[149]).
For many years, concerns about job automation focused on low-skilled labour, but high-cognitive tasks are increasingly being assisted by AI, which can impact civil servants such as policy analysts. Generative AI can produce meaningful text, conduct data analysis and even propose policy strategies to address complex challenges. Many civil servants will need to be trained to collaborate effectively with AI and focus on higher level strategic thinking and decision-making.
Public resistance risks
Citizens may selectively accept AI-informed outputs, potentially contributing to errors
Understanding human-AI interactions is increasingly seen as a key challenge for public administration, with significant implications for trust and legitimacy. As governments integrate AI into decision-making, public acceptance of these systems becomes critical. Research suggests algorithmic decisions, despite their promise of neutrality, are not always perceived as fair or legitimate, particularly when they contradict individuals’ expectations or lack transparency (Alon-Barkat and Busuioc, 2022[116]).
Citizens process algorithmic decisions through cognitive perceptions and prior beliefs; they often selectively accept AI-generated recommendations that align with their expectations while resisting those to the contrary (Alon-Barkat and Busuioc, 2022[116]). This selective adherence can reinforce errors in government decision-making, as individuals may be more likely to trust algorithmic predictions when they confirm pre-existing stereotypes or prior knowledge.
Lack of public empowerment and understanding about how government uses AI
There is often a lack of knowledge and understanding among the public regarding AI in general, and more specifically, how governments use it (Arnesen et al., 2024[150]). This can lead to misconceptions and fears about its capabilities and whether governments are using it in a trustworthy manner. This could potentially result in outcomes such as rumours spread online about government’s use of AI in policymaking or in delivering services such as loans, access to justice or social benefits. This can occur because AI’s complexity is often misunderstood, leading to inaccurate assumptions about how decisions are made, the fairness of outcomes and the ability to hold these systems accountable; or because of the limited transparency on how public authorities are using AI. This limited transparency or gap in understanding makes the public more susceptible to scepticism regarding governments’ actions and intentions, resulting in resistance to AI-driven solutions in public services (Valle-Cruz, Garcia-Contreras and Gil-Garcia, 2023[96]).
AI decisions can be perceived as overly rigid or unaccountable, particularly in the absence of human oversight or avenues for redress. Research suggests that when citizens feel disempowered in interactions with automated systems — such as being unable to challenge incorrect AI-based decisions — they experience higher psychological and compliance costs (Alon-Barkat and Busuioc, 2024[72]).
Conversely, higher levels of public empowerment, transparency about use, and knowledge and understanding of AI are associated with greater trust in the government's ability to use AI responsibly (Lahusen, Maggetti and Slavkovik, 2024[151]; KPMG, 2025[152]; Alessandro et al., 2021[153]). A well-informed citizenry is more likely to recognise the safeguards and ethical considerations implemented by the government, fostering confidence in its competence and integrity in deploying AI technologies.
AI misuse and scandals can undermine trust and contribute to public resistance
The use of AI in government has resulted in several high-profile scandals and cases of real-world harm. This underscores the high reputational costs of AI misuse, with incidents from years ago still prominent in public discourse. A high-profile failure in one AI system can erode confidence in a broader array of government use cases (Longoni, Cian and Kyung, 2022[154]). Public backlash has even led people to withdraw from data sharing or hindered the use of existing tools (Ada Lovelace Institute, 2025[155]).
Public backlash is particularly likely when AI errors result in visible harm, such as wrongful denial of benefits or unfair treatment based on skewed predictions. High-profile failures can reinforce public scepticism and fuel concerns that AI-driven decisions may undermine procedural justice and democratic accountability (Alon-Barkat and Busuioc, 2022[116]). Data from the OECD AI Incidents Monitor (AIM) shows growth in AI incidents and hazards reported by reputable media sources in recent years (Figure 1.5).17 As of April 2025, 3 816 of the 14 981 incidents listed (25%) were related to “government, security and defence”, illustrating that governments need to mitigate risks in order to secure citizen trust.
Figure 1.5. AI incidents have generally trended upwards since late 2022
Note: An overview of the methodology can be found at https://oecd.ai/incidents-methodology.
Source: OECD AI Incidents Monitor (AIM) – https://oecd.ai/incidents.
Citizens can also exhibit algorithmic aversion, often resisting algorithmic decision-making due to a preference for personal agency and control, even when AI systems outperform humans. This reluctance is reinforced by a greater tolerance for human errors compared to algorithmic mistakes; people tend to lose trust in AI after observing a single failure, whereas they are more forgiving of similar errors made by humans. Algorithmic aversion can be more common in some functions (e.g. public safety) than in others (e.g. general management), which can contribute to different challenges and differing levels of maturity across government functions (see Chapter 3) (Zehnle, Hildebrand and Valenzuela, 2025[156]). Providing users with insight into the reasoning behind an AI’s recommendation or offering limited control to modify its output can significantly improve acceptance, increasing the likelihood of users adopting AI-driven advice (Sunstein, 2023[71]). Beyond contributing to public resistance risks, algorithmic aversion on the part of public servants can hinder governments’ ability to harness the benefits of AI, as discussed in Chapter 3.
Risks of inaction
The most discussed risks of AI involve the implications of its deployment and adoption. A less discussed risk, however, involves delays in leveraging AI to yield real-world benefits, including in government and public services. Stanford University’s One Hundred Year Study on Artificial Intelligence (AI100) (2014[21]) noted that “numerous advances in AI can reduce costs, introduce new efficiencies and raise the quality of life... However, the methods have not come into wide use. The sluggish translation of these technologies into the world translates into unnecessary deaths and costs. There is an urgent need to better understand how we can more quickly translate valuable existing AI competencies and advances into real-world practice.” While this study is over a decade old, its conclusion still holds, especially in government. Beyond missed opportunities, inaction on AI widens the gap between public sector and private sector capacity (Pahlka, 2024[157]). This means governments could fall behind not only in their ability to use AI but also in their ability to regulate the technology. AI experts suggest that one of the most critical AI risks is the inability of governance mechanisms and institutions to keep up with rapid AI evolutions (OECD, 2024[14]).
Some research highlights that negative hype and fear around AI can contribute to this risk (Laplante et al., 2020[158]). Such research notes that other limiting factors include a lack of suitable data, confusion around privacy issues and complexities, and outdated legacy IT systems. Experts have also identified “analysis paralysis”, the fear of getting AI wrong, as a factor that could stall the implementation of even low-risk efforts, foregoing significant benefits (OECD.AI, 2023[159]). This has proven true in government: the latest cross-cutting OECD (2024[13]) work on AI in government found that governments need to better promote and enable the positive aspects of using AI, rather than focusing so disproportionately on preventing negative ones. This focus on risks might deter the deployment of high-benefit, low-risk uses of AI to improve public policies and services.
Realising a positive future for AI in government
As discussed above, governments are seeking to use AI to increase productivity, with more efficient internal operations and more effective policies and services; responsiveness, through tailored approaches that meet the evolving needs of citizens and businesses; and accountability, by enhancing their capacity for oversight. However, there is tremendous untapped potential for governments to use today’s AI technologies and to prepare to unlock the opportunities presented by tomorrow’s AI. Even so, a vision of a future where governments successfully develop and adopt trustworthy AI is beginning to emerge.
See AI not as an opportunity to automate the public sector but to reimagine it. We welcome a long-term vision for public service transformation where AI follows rather than leads, one that is grounded in public and professional legitimacy. Public sector leaders should see the rollout of AI as an opportunity to reimagine the state, rather than focusing solely on immediate efficiency gains or automating the status quo. AI should be viewed as a catalyst for fundamental service redesign, placing the citizen at the centre of public service delivery. – Ada Lovelace Institute (2025[155])
This vision shines through in Chapter 5’s discussion of AI in core functions of government. By pursuing AI as part of their digital journeys, governments can transform, rather than just optimise, how they achieve their missions, deliver public value and promote societal well-being.
Future capabilities and uses of AI may present benefits and changes that are currently impossible or even inconceivable. This is also true of its potential risks. The contents of Chapter 5 represent what is currently known about AI in government, and by extension, what can be imagined about the future. Governments and the OECD need to remain vigilant in evaluating how evolving AI technologies and applications may affect public institutions, civil servants, and society at large — ensuring continuous assessment and adaptation in service of the public good.
Through analysis and synthesis of the government functions covered in Chapter 5 and other research, the OECD has conducted cross-cutting research into current trends in AI use across government functions and has identified early lessons from these use cases. This research and its findings are discussed in the following chapter.
References
[149] Acemoglu, D. (2024), “The Simple Macroeconomics of AI”, NBER, Working Paper 32487, http://www.nber.org/papers/w32487.
[147] Acemoglu, D. et al. (2022), “Artificial Intelligence and Jobs: Evidence from Online Vacancies”, Journal of Labor Economics, Vol. 40/S1, pp. S293-S340, https://doi.org/10.1086/718327.
[155] Ada Lovelace Institute (2025), Learn fast and build things: Lessons from six years of studying AI in the public sector, Ada Lovelace Institute, https://www.adalovelaceinstitute.org/policy-briefing/public-sector-ai/.
[153] Alessandro, M. et al. (2021), “Transparency and Trust in Government. Evidence from a Survey Experiment”, World Development, Vol. 138, p. 105223, https://doi.org/10.1016/j.worlddev.2020.105223.
[72] Alon-Barkat, S. and M. Busuioc (2024), “Public administration meets artificial intelligence: Towards a meaningful behavioral research agenda on algorithmic decision-making in government”, Journal of Behavioral Public Administration, Vol. 7, https://doi.org/10.30636/jbpa.71.261.
[116] Alon-Barkat, S. and M. Busuioc (2022), “Human–AI Interactions in Public Sector Decision Making: “Automation Bias” and “Selective Adherence” to Algorithmic Advice”, Journal of Public Administration Research and Theory, Vol. 33/1, pp. 153-169, https://doi.org/10.1093/jopart/muac007.
[69] Aonghusa, P. and S. Michie (2020), “Artificial intelligence and behavioral science through the looking glass: Challenges for real-world application”, Annals of Behavioral Medicine, pp. 942-947, https://doi.org/10.1093/abm/kaaa095.
[127] APA (2023), Worries about artificial intelligence, surveillance at work may be connected to poor mental health, https://www.apa.org/news/press/releases/2023/09/artificial-intelligence-poor-mental-health.
[150] Arnesen, S. et al. (2024), “Knowledge and support for AI in the public sector: a deliberative poll experiment”, AI & SOCIETY, https://doi.org/10.1007/s00146-024-02104-w.
[33] Austin, T. et al. (2024), A snapshot of how public sector leaders feel about generative AI, https://www2.deloitte.com/us/en/insights/industry/public-sector/ai-adoption-in-public-sector.html.
[30] BCG (2024), Where’s the Value in AI?, https://www.bcg.com/publications/2024/wheres-value-in-ai.
[121] Beltran, M., M. Ruiz Mondragon and S. Han (2024), “Comparative Analysis of Generative AI Risks in the Public Sector”, Proceedings of the 25th Annual International Conference on Digital Government Research, pp. 610-617, https://doi.org/10.1145/3657054.3657125.
[26] Bengio, Y. et al. (2025), International AI Safety Report, DSIT 2025/001, 2025, https://www.gov.uk/government/publications/international-ai-safety-report-2025.
[37] Berglind, N., A. Fadia and T. Isherwood (2022), The potential value of AI—and how governments could look to capture it, https://www.mckinsey.com/industries/public-sector/our-insights/the-potential-value-of-ai-and-how-governments-could-look-to-capture-it (accessed on July 2024).
[8] Berryhill, J. et al. (2019), “Hello, World: Artificial intelligence and its use in the public sector”, OECD Working Papers on Public Governance, No. 36, OECD Publishing, Paris, https://doi.org/10.1787/726fd39d-en.
[22] Brizuela, A. et al. (2025), Analysis of the generative AI landscape in the European public sector, European Commission, https://op.europa.eu/s/z4XY.
[43] Brougham, D. and J. Haar (2017), “Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA): Employees’ perceptions of our future workplace”, Journal of Management & Organization, Vol. 24/2, pp. 239-257, https://doi.org/10.1017/jmo.2016.55.
[132] Brown, H. et al. (2022), “What Does it Mean for a Language Model to Preserve Privacy?”, 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 2280-2292, https://doi.org/10.1145/3531146.3534642.
[135] Brundage, M. et al. (2018), The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, https://arxiv.org/abs/1802.07228.
[67] Brynjolfsson, E., L. Danielle and L. Raymond (2023), Generative AI at Work, National Bureau of Economic Research, https://doi.org/10.3386/w31161.
[112] Byler, D. (2021), In the Camps: China’s High-Tech Penal Colony, Columbia Global Reports, https://www.jstor.org/stable/j.ctv2dzzqqm.
[27] Calvino, F. and L. Fontanelli (2023), “A portrait of AI adopters across countries: Firm characteristics, assets’ complementarities and productivity”, OECD Science, Technology and Industry Working Papers, No. 2023/02, OECD Publishing, Paris, https://doi.org/10.1787/0fb79bb9-en.
[86] Chignard, S. (2013), A brief history of Open Data, https://www.paristechreview.com/2013/03/29/brief-history-open-data.
[111] Clarke, S. and J. Whittlestone (2022), A Survey of the Potential Long-term Impacts of AI - How AI Could Lead to Long-term Changes in Science, Cooperation, Power, Epistemics and Values, https://dl.acm.org/doi/abs/10.1145/3514094.3534131.
[18] Cognitus, A. (2024), 9 Agentic AI Examples: Real-World Use Cases and Applications, https://integrail.ai/blog/agentic-ai-examples.
[46] Corvalán, J. and E. Le Fevre Cervini (2020), Prometea experience. Using AI to optimize public institutions, https://ceridap.eu/prometea-experience-using-ai-to-optimize-public-institutions.
[160] DataHeroes (2023), Noise in Machine Learning, https://dataheroes.ai/glossary/noise-in-machine-learning.
[97] Davies, H., S. Nutley and I. Walter (2008), “Why ‘knowledge transfer’ is misconceived for applied social research”, Journal of Health Services Research & Policy, Vol. 13/3, pp. 188-190, https://doi.org/10.1258/jhsrp.2008.008055.
[48] Dell’Acqua, F. et al. (2025), The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise, Elsevier BV, https://doi.org/10.2139/ssrn.5188231.
[52] Dell’Acqua, F. et al. (2023), “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality”, SSRN Electronic Journal, https://doi.org/10.2139/ssrn.4573321.
[64] Desouza, K. and B. Jacob (2014), “Big Data in the Public Sector: Lessons for Practitioners and Scholars”, Administration & Society, Vol. 49/7, pp. 1043-1064, https://doi.org/10.1177/0095399714555751.
[144] Dieterle, E., C. Dede and M. Walker (2022), “The cyclical ethical effects of using artificial intelligence in education”, AI & SOCIETY, Vol. 39/2, pp. 633-643, https://doi.org/10.1007/s00146-022-01497-w.
[68] Du, M. (2023), “Machine vs. human, who makes a better judgment on innovation? Take GPT-4 for example”, Frontiers in Artificial Intelligence, Vol. 6, https://doi.org/10.3389/frai.2023.1206516.
[126] EC (2025), Study exploring the context, challenges, opportunities, and trends in algorithmic management, European Commission, https://employment-social-affairs.ec.europa.eu/study-exploring-context-challenges-opportunities-and-trends-algorithmic-management_en.
[41] EC (2024), What factors influence perceived artificial intelligence adoption by public managers, https://publications.jrc.ec.europa.eu/repository/handle/JRC138684.
[137] Eshelby, L. (2025), Addressing insider threats in the public sector, https://www.openaccessgovernment.org/addressing-insider-threats-in-the-public-sector/187801/.
[94] EU (2024), Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence, European Union, https://eur-lex.europa.eu/eli/reg/2024/1689/oj.
[107] Fazlioglu, M. (2024), Consumer Perspectives of Privacy and Artificial Intelligence, https://iapp.org/resources/article/consumer-perspectives-of-privacy-and-ai/.
[101] Feldstein, S. (2022), AI & Big Data Global Surveillance Index (2022 updated), https://doi.org/10.17632/gjhf5y4xjp.4.
[24] Filippucci, F., P. Gal and M. Schief (2024), “Miracle or Myth? Assessing the macroeconomic productivity gains from Artificial Intelligence”, OECD Artificial Intelligence Papers, No. 29, OECD Publishing, Paris, https://doi.org/10.1787/b524a072-en.
[73] Fitkov-Norris, E. and N. Kocheva (2025), “Leveraging AI for strategic foresight: Unveiling future horizons”, in Improving and Enhancing Scenario Planning, Edward Elgar Publishing, https://doi.org/10.4337/9781035310586.00023.
[54] Flavián, C. and L. Casaló (2021), “Artificial intelligence in services: current trends, benefits and challenges”, The Service Industries Journal, Vol. 41/13-14, pp. 853–859, https://doi.org/10.1080/02642069.2021.1989177.
[23] Gartner (2024), Gartner 2024 Hype Cycle for Emerging Technologies Highlights Developer Productivity, Total Experience, AI and Security, https://www.gartner.com/en/newsroom/press-releases/2024-08-21-gartner-2024-hype-cycle-for-emerging-technologies-highlights-developer-productivity-total-experience-ai-and-security.
[122] Gerlich, M. (2025), “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking”, Societies, Vol. 15/1, p. 6, https://doi.org/10.3390/soc15010006.
[56] Giest, S. (2017), “Big data for policymaking: fad or fasttrack?”, Policy Sciences, Vol. 50/3, pp. 367-382, https://doi.org/10.1007/s11077-017-9293-1.
[128] Gmyrek, P., J. Berg and D. Bescond (2023), Generative AI and Jobs: A global analysis of potential effects on job quantity and quality, ILO, https://doi.org/10.54394/FHEM8239.
[117] Goldman, S. (2023), OpenAI has grand ‘plans’ for AGI. Here’s another way to read its manifesto, https://venturebeat.com/ai/openai-has-grand-plans-for-agi-heres-another-way-to-read-its-manifesto-the-ai-beat/.
[136] Gopireddy, R. (2024), Securing AI Systems: Protecting Against Adversarial Attacks and Data Poisoning, https://jsaer.com/download/vol-11-iss-5-2024/JSAER2024-11-5-276-281.pdf.
[95] Government of Korea (2024), A New Chapter in the Age of AI: Basic Act on AI Passed at the National Assembly‘s Plenary Session, https://www.msit.go.kr/eng/bbs/view.do?bbsSeqNo=42&mId=4&mPid=2&nttSeqNo=1071.
[66] Green, B. (2022), “The flaws of policies requiring human oversight of government algorithms”, Computer Law & Security Review, Vol. 45, p. 105681, https://doi.org/10.1016/j.clsr.2022.105681.
[161] Grzegorzek, J. (2024), Digital Taylorism: The Use of Data to Monitor Employees, https://medium.com/%40JerryGrzegorzek/digital-taylorism-the-use-of-data-to-monitor-employees-582b331d970a.
[75] Gupta, T. and S. Roy (2024), “Applications of Artificial Intelligence in Disaster Management”, Proceedings of the 2024 10th International Conference on Computing and Artificial Intelligence, pp. 313-318, https://doi.org/10.1145/3669754.3669802.
[28] Hampole, M. et al. (2025), Artificial Intelligence and the Labor Market, National Bureau of Economic Research, Cambridge, MA, https://doi.org/10.3386/w33509.
[62] Höchtl, J., P. Parycek and R. Schöllhammer (2015), “Big data in the policy cycle: Policy decision making in the digital era”, Journal of Organizational Computing and Electronic Commerce, Vol. 26/1-2, pp. 147-169, https://doi.org/10.1080/10919392.2015.1125187.
[89] Ho, D. (2023), Opportunities and Risks of Artificial Intelligence in the Public Sector, https://law.stanford.edu/2023/05/25/opportunities-and-risks-of-artificial-intelligence-in-the-public-sector/.
[115] Horowitz, M. (2023), Bending the Automation Bias Curve: A Study of Human and AI-based Decision Making in National Security Contexts, https://arxiv.org/abs/2306.16507.
[21] Horvitz, E. (2014), Reflections and Framing: One-Hundred Year Study on Artificial Intelligence: Reflections and Framing, https://ai100.stanford.edu/reflections-and-framing.
[53] Huang, M. and R. Rust (2021), “Engaged to a Robot? The Role of AI in Service”, Journal of Service Research, Vol. 24/1, pp. 30–41, https://doi.org/10.1177/1094670520902266.
[131] ICO (2023), Joint statement on data scraping and the protection of privacy, https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/08/joint-statement-on-data-scraping-and-data-protection/.
[139] ITU (2023), Measuring digital development: Facts and Figures 2023, https://www.itu.int/en/ITU-D/Statistics/Pages/facts/default.aspx (accessed on July 2024).
[76] Jarrahi, M. et al. (2023), “Artificial intelligence and knowledge management: A partnership between human and AI”, Business Horizons, Vol. 66/1, pp. 87-99, https://doi.org/10.1016/j.bushor.2022.03.002.
[87] Jeevanandam, N. (2024), AI in agriculture in 2025: Transforming Indian farms for a sustainable future, https://indiaai.gov.in/article/ai-in-agriculture-in-2025-transforming-indian-farms-for-a-sustainable-future.
[47] Jones, C. (2022), “The Past and Future of Economic Growth: A Semi-Endogenous Perspective”, Annual Review of Economics, Vol. 14/1, pp. 125-152, https://doi.org/10.1146/annurev-economics-080521-012458.
[120] Klingbeil, A., C. Grützner and P. Schreck (2024), Trust and reliance on AI — An experimental study on the extent and costs of overreliance on AI, https://doi.org/10.1016/j.chb.2024.108352 (accessed on September 2024).
[61] Kolkman, D. (2020), “The usefulness of algorithmic models in policy making”, Government Information Quarterly, Vol. 37/3, p. 101488, https://doi.org/10.1016/j.giq.2020.101488.
[57] Kopponen, A. et al. (2024), “Personalised public services powered by AI: the citizen digital twin approach”, in Research Handbook on Public Management and Artificial Intelligence, Edward Elgar Publishing, https://doi.org/10.4337/9781802207347.00020.
[152] KPMG (2025), Trust in artificial intelligence: global insights 2025, https://kpmg.com/au/en/home/insights/2025/04/trust-in-ai-global-insights-2025.html.
[151] Lahusen, C., M. Maggetti and M. Slavkovik (2024), “Trust, trustworthiness and AI governance”, Scientific Reports, Vol. 14/1, https://doi.org/10.1038/s41598-024-71761-0.
[158] Laplante, P. et al. (2020), “Artificial Intelligence and Critical Systems: From Hype to Reality”, Computer, Vol. 53/11, pp. 45-52, https://doi.org/10.1109/mc.2020.3006177.
[141] Larsson, K. (2021), “Digitization or equality: When government automation covers some, but not all citizens”, Government Information Quarterly, Vol. 38/1, p. 101547, https://doi.org/10.1016/j.giq.2020.101547.
[114] Lima, G. et al. (2022), “The Conflict Between Explainable and Accountable Decision-Making Algorithms”, 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 2103-2113, https://doi.org/10.1145/3531146.3534628.
[154] Longoni, C., L. Cian and E. Kyung (2022), “Algorithmic Transference: People Overgeneralize Failures of AI in the Government”, Journal of Marketing Research, Vol. 60/1, pp. 170-188, https://doi.org/10.1177/00222437221110139.
[17] Lorenz, P., K. Perset and J. Berryhill (2023), “Initial policy considerations for generative artificial intelligence”, OECD Artificial Intelligence Papers, No. 1, OECD Publishing, Paris, https://doi.org/10.1787/fae2d1e6-en.
[70] Ludwig, J. and S. Mullainathan (2022), “Algorithmic Behavioral Science: Machine Learning as a Tool for Scientific Discovery”, SSRN Electronic Journal, https://doi.org/10.2139/ssrn.4164272.
[32] Manning, B., K. Zhu and J. Horton (2024), Automated Social Science: Language Models as Scientist and Subjects, https://arxiv.org/abs/2404.11794.
[138] Mariani, J., P. Kishnani and A. Alibage (2025), Government’s less trodden path to scaling generative AI, https://www2.deloitte.com/us/en/insights/industry/public-sector/government-faces-challenges-with-generative-ai-adoption.html.
[35] Mellouli, S., M. Janssen and A. Ojo (2024), “Introduction to the Issue on Artificial Intelligence in the Public Sector: Risks and Benefits of AI for Governments”, Digital Government: Research and Practice, Vol. 5/1, pp. 1-6, https://doi.org/10.1145/3636550.
[123] Meng, J. (2024), AI emerges as the frontier in behavioral science, https://doi.org/10.1073/pnas.2401336121.
[34] Mergel, I. et al. (2023), “Implementing AI in the public sector”, Public Management Review, pp. 1-14, https://doi.org/10.1080/14719037.2023.2231950.
[148] Milanez, A. (2023), “The impact of AI on the workplace: Evidence from OECD case studies of AI implementation”, OECD Social, Employment and Migration Working Papers, No. 289, OECD Publishing, Paris, https://doi.org/10.1787/2247ce58-en.
[124] Milanez, A., A. Lemmens and C. Ruggiu (2025), “Algorithmic management in the workplace: New evidence from an OECD employer survey”, OECD Artificial Intelligence Papers, No. 31, OECD Publishing, Paris, https://doi.org/10.1787/287c13c4-en.
[58] Mills, S., S. Costa and C. Sunstein (2023), “AI, Behavioural Science, and Consumer Welfare”, J Consum Policy, Vol. 46, pp. 387–400, https://doi.org/10.1007/s10603-023-09547-6.
[102] Nikiforova, A. et al. (2023), “Towards High-Value Datasets Determination for Data-Driven Development: A Systematic Literature Review”, in Lecture Notes in Computer Science, Electronic Government, Springer Nature Switzerland, Cham, https://doi.org/10.1007/978-3-031-41138-0_14.
[20] NIST (2025), Technical Blog: Strengthening AI Agent Hijacking Evaluations, https://www.nist.gov/news-events/news/2025/01/technical-blog-strengthening-ai-agent-hijacking-evaluations.
[49] Noy, S. and W. Zhang (2023), “Experimental evidence on the productivity effects of generative artificial intelligence”, Science, Vol. 381/6654, pp. 187-192, https://doi.org/10.1126/science.adh2586.
[90] OECD (2025), How Innovation Ecosystems Foster Citizen Participation Using Emerging Technologies in Portugal, Spain and the Netherlands, OECD Public Governance Reviews, OECD Publishing, Paris, https://doi.org/10.1787/2cb37a30-en.
[79] OECD (2025), Sharing trustworthy AI models with privacy-enhancing technologies, OECD Publishing, https://doi.org/10.1787/a266160b-en.
[162] OECD (2025), “Towards a common reporting framework for AI incidents”, OECD Artificial Intelligence Papers, No. 34, OECD Publishing, Paris, https://doi.org/10.1787/f326d4ac-en.
[5] OECD (2024), “2023 OECD Digital Government Index: Results and key findings”, OECD Public Governance Policy Papers, No. 44, OECD Publishing, Paris, https://doi.org/10.1787/1a89ed5e-en.
[129] OECD (2024), “AI, data governance and privacy: Synergies and areas of international co-operation”, OECD Artificial Intelligence Papers, No. 22, OECD Publishing, Paris, https://doi.org/10.1787/2476b1a4-en.
[14] OECD (2024), “Assessing potential future artificial intelligence risks, benefits and policy imperatives”, OECD Artificial Intelligence Papers, No. 27, OECD Publishing, Paris, https://doi.org/10.1787/3f4e3dfb-en.
[163] OECD (2024), “Defining AI incidents and related terms”, OECD Artificial Intelligence Papers, No. 16, OECD Publishing, Paris, https://doi.org/10.1787/d1a8d965-en.
[7] OECD (2024), “Explanatory memorandum on the updated OECD definition of an AI system”, OECD Artificial Intelligence Papers, No. 8, OECD Publishing, Paris, https://doi.org/10.1787/623da898-en.
[92] OECD (2024), Facts not Fakes: Tackling Disinformation, Strengthening Information Integrity, OECD Publishing, Paris, https://doi.org/10.1787/d909ff7a-en.
[105] OECD (2024), Global Trends in Government Innovation 2024: Fostering Human-Centred Public Services, OECD Public Governance Reviews, OECD Publishing, Paris, https://doi.org/10.1787/c1bc19c3-en.
[13] OECD (2024), “Governing with Artificial Intelligence: Are governments ready?”, OECD Artificial Intelligence Papers, No. 20, OECD Publishing, Paris, https://doi.org/10.1787/26324bc2-en.
[59] OECD (2024), Modernising Access to Social Protection: Strategies, Technologies and Data Advances in OECD Countries, OECD Publishing, Paris, https://doi.org/10.1787/af31746d-en.
[11] OECD (2024), OECD Artificial Intelligence Review of Germany, OECD Publishing, Paris, https://doi.org/10.1787/609808d6-en.
[15] OECD (2024), OECD Digital Economy Outlook 2024 (Volume 1): Embracing the Technology Frontier, OECD Publishing, Paris, https://doi.org/10.1787/a1689dc5-en.
[4] OECD (2024), OECD Survey on Drivers of Trust in Public Institutions – 2024 Results: Building Trust in a Complex Policy Environment, OECD Publishing, Paris, https://doi.org/10.1787/9a20554b-en.
[6] OECD (2024), Recommendation of the Council on Artificial Intelligence, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
[31] OECD (2023), Artificial Intelligence in Science: Challenges, Opportunities and the Future of Research, OECD Publishing, Paris, https://doi.org/10.1787/a8d820bd-en.
[80] OECD (2023), “Emerging privacy-enhancing technologies: Current regulatory and policy approaches”, OECD Digital Economy Papers, No. 351, OECD Publishing, Paris, https://doi.org/10.1787/bf121be4-en.
[38] OECD (2023), Global Trends in Government Innovation 2023, OECD Public Governance Reviews, OECD Publishing, Paris, https://doi.org/10.1787/0655b570-en.
[25] OECD (2023), OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market, OECD Publishing, Paris, https://doi.org/10.1787/08785bba-en.
[88] OECD (2023), “The state of implementation of the OECD AI Principles four years on”, OECD Artificial Intelligence Papers, No. 3, OECD Publishing, Paris, https://doi.org/10.1787/835641c9-en.
[109] OECD (2022), Emerging future AI risks, OECD, https://wp.oecd.ai/app/uploads/2023/03/OECD-Foresight-workshop-notes-1.pdf.
[93] OECD (2022), “Measuring the environmental impacts of artificial intelligence compute and applications: The AI footprint”, OECD Digital Economy Papers, No. 341, OECD Publishing, Paris, https://doi.org/10.1787/7babf571-en.
[40] OECD (2022), “OECD Framework for the Classification of AI systems”, OECD Digital Economy Papers, No. 323, OECD Publishing, Paris, https://doi.org/10.1787/cb6d9eca-en.
[83] OECD (2021), Data-Driven, Information-Enabled Regulatory Delivery, OECD Publishing, Paris, https://doi.org/10.1787/8f99ec8c-en.
[100] OECD (2020), “Dealing with digital security risk during the Coronavirus (COVID-19) crisis”, OECD Policy Responses to Coronavirus (COVID-19), OECD Publishing, Paris, https://doi.org/10.1787/c9d3fe8e-en.
[2] OECD (2020), Embracing Innovation in Government - Global Trends 2020: Innovative responses to the COVID-19 crisis, OECD Publishing, https://trends.oecd-opsi.org/trend-reports/innovative-covid-19-solutions/.
[3] OECD (2020), “The OECD Digital Government Policy Framework: Six dimensions of a Digital Government”, OECD Public Governance Policy Papers, No. 02, OECD Publishing, Paris, https://doi.org/10.1787/f64fed2a-en.
[65] OECD (2019), The Path to Becoming a Data-Driven Public Sector, OECD Digital Government Studies, OECD Publishing, Paris, https://doi.org/10.1787/059814a7-en.
[82] OECD (2018), OECD Regulatory Enforcement and Inspections Toolkit, OECD Publishing, Paris, https://doi.org/10.1787/9789264303959-en.
[84] OECD (2018), Open Government Data Report: Enhancing Policy Maturity for Sustainable Impact, OECD Digital Government Studies, OECD Publishing, Paris, https://doi.org/10.1787/9789264305847-en.
[103] OECD (2017), Recommendation of the Council on Health Data Governance, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0433.
[1] OECD (2014), Recommendation of the Council on Digital Government Strategies, OECD Publishing, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0406.
[104] OECD (1980), Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0188.
[130] OECD (forthcoming), Mapping Relevant Data Collection Mechanisms for AI Training, OECD Publishing.
[16] OECD.AI (2023), OECD Expert Group on AI Futures - Meeting 2 (18th & 20th September 2023), https://oecd.ai/en/site/ai-futures/meeting-summaries.
[159] OECD.AI (2023), What do you see as the most significant potential benefits and risks of AI 10+ years from now?, https://oecd.ai/en/network-of-experts/ai-futures/discussions/future-benefits-risks.
[10] OECD/CAF (2022), The Strategic and Responsible Use of Artificial Intelligence in the Public Sector of Latin America and the Caribbean, OECD Public Governance Reviews, OECD Publishing, Paris, https://doi.org/10.1787/1f334543-en.
[12] OECD/UNESCO (2024), G7 Toolkit for Artificial Intelligence in the Public Sector, OECD Publishing, Paris, https://doi.org/10.1787/421c1244-en.
[118] Olson, P. (2023), Don’t Go Down That AI Longtermism Rabbit Hole, https://www.bloomberg.com/opinion/articles/2023-05-19/ai-longtermism-alarmists-are-dragging-us-all-down-existential-rabbit-hole.
[157] Pahlka, J. (2024), AI meets the cascade of rigidity, https://www.niskanencenter.org/ai-meets-the-cascade-of-rigidity/.
[119] Passi, S. and M. Vorvoreanu (2022), Overreliance on AI: Literature review, https://www.microsoft.com/en-us/research/publication/overreliance-on-ai-literature-review/ (accessed September 2024).
[146] Peixoto, T., O. Canuto and L. Jordan (2024), AI and the Future of Government: Unexpected Effects and Critical Challenges, https://www.policycenter.ma/publications/ai-and-future-government-unexpected-effects-and-critical-challenges.
[42] Pencheva, I., M. Esteve and S. Mikhaylov (2018), “Big Data and AI – A transformational shift for government: So, what next for research?”, Public Policy and Administration, Vol. 35/1, pp. 24-44, https://doi.org/10.1177/0952076718780537.
[50] Peng, S. et al. (2023), The Impact of AI on Developer Productivity: Evidence from GitHub Copilot, https://arxiv.org/abs/2302.06590.
[143] Perry, A. and N. Turner Lee (2019), AI is coming to schools, and if we’re not careful, so will its biases, https://www.brookings.edu/articles/ai-is-coming-to-schools-and-if-were-not-careful-so-will-its-biases/.
[19] Purdy, M. (2024), What Is Agentic AI, and How Will It Change Work?, https://hbr.org/2024/12/what-is-agentic-ai-and-how-will-it-change-work.
[106] Rainie, L. and J. Anderson (2024), Experts Imagine the Impact of Artificial Intelligence by 2040, https://imaginingthedigitalfuture.org/wp-content/uploads/2024/02/AI2040-FINAL-White-Paper-2-2.29.24.pdf.
[145] Röttger, P. et al. (2024), “SafetyPrompts: a Systematic Review of Open Datasets for Evaluating and Improving Large Language Model Safety”, arXiv.org, https://doi.org/10.48550/arXiv.2404.05399.
[113] Russell, S. (2019), Human Compatible: Artificial Intelligence and the Problem of Control, Viking.
[99] Saheb, T. (2022), “Ethically contentious aspects of artificial intelligence surveillance: a social science perspective”, AI and Ethics, Vol. 3/2, pp. 369-379, https://doi.org/10.1007/s43681-022-00196-y.
[134] Salem, A. et al. (2018), ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models, https://arxiv.org/abs/1806.01246.
[39] Santiso, C. (2023), Public Governance in the Age of Artificial Intelligence, https://www.chandlerinstitute.org/governancematters/public-governance-in-the-age-of-artificial-intelligence.
[36] Santos, R. et al. (2024), “The use of AI in government and its risks: lessons from the private sector”, Transforming Government: People, Process and Policy, https://doi.org/10.1108/tg-02-2024-0038.
[78] Sanzogni, L., G. Guzman and P. Busch (2017), “Artificial intelligence and knowledge management: questioning the tacit dimension”, Prometheus, Vol. 35, pp. 37-56, https://doi.org/10.1080/08109028.2017.1364547.
[45] Sapci, A. and H. Sapci (2019), “Innovative Assisted Living Tools, Remote Monitoring Technologies, Artificial Intelligence-Driven Solutions, and Robotic Systems for Aging Societies: Systematic Review”, JMIR Aging, Vol. 2/2, p. e15429, https://doi.org/10.2196/15429.
[98] Shane, J. (2019), You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place, https://www.hachettebookgroup.com/titles/janelle-shane/you-look-like-a-thing-and-i-love-you/9780316525220/.
[140] Singh, V. and J. Chobotaru (2022), “Digital Divide: Barriers to Accessing Online Government Services in Canada”, Administrative Sciences, Vol. 12/3, p. 112, https://doi.org/10.3390/admsci12030112.
[133] Staab, R. et al. (2023), Beyond Memorization: Violating Privacy Via Inference with Large Language Models, https://arxiv.org/abs/2310.07298.
[71] Sunstein, C. (2023), “The use of algorithms in society”, The Review of Austrian Economics, Vol. 37/4, pp. 399-420, https://doi.org/10.1007/s11138-023-00625-z.
[74] Sun, W., P. Bocchini and B. Davison (2020), “Applications of artificial intelligence for disaster management”, Natural Hazards, Vol. 103/3, pp. 2631-2689, https://doi.org/10.1007/s11069-020-04124-3.
[110] Tegmark, M. (2017), Life 3.0: Being Human in the Age of Artificial Intelligence, Penguin, https://mitpressbookstore.mit.edu/book/9781101970317.
[51] The Economist (2025), How AI will divide the best from the rest, https://www.economist.com/finance-and-economics/2025/02/13/how-ai-will-divide-the-best-from-the-rest.
[91] Tse, T. and S. Karimov (2022), Decision-making risks slow down the use of artificial intelligence in business, https://blogs.lse.ac.uk/businessreview/2022/05/18/decision-making-risks-slow-down-the-use-of-artificial-intelligence-in-business-1/.
[9] Ubaldi, B. et al. (2019), “State of the art in the use of emerging technologies in the public sector”, OECD Working Papers on Public Governance, No. 31, OECD Publishing, Paris, https://doi.org/10.1787/932780bc-en.
[125] UC Berkeley (2021), Positive AI Economic Futures, World Economic Forum, https://www.weforum.org/reports/positive-ai-economic-futures.
[108] UN (2024), Governing AI for Humanity, https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf.
[55] UN (2022), E-Government Survey 2022: The Future of Digital Government, United Nations, https://desapublications.un.org/sites/default/files/publications/2022-09/Web%20version%20E-Government%202022.pdf.
[142] UNESCO (2019), Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development, https://www.gcedclearinghouse.org/sites/default/files/resources/190175eng.pdf.
[81] US GAO (2024), Fraud Risk Management: 2018-2022 Data Show Federal Government Loses an Estimated $233 Billion to $521 Billion Annually to Fraud, Based on Various Risk Environments, https://www.gao.gov/products/gao-24-105833.
[85] USGS (2024), Landsat’s Economic Value increased to $25.6 Billion in 2023, https://www.usgs.gov/news/featured-story/landsats-economic-value-increases-256-billion-2023.
[60] Valle-Cruz, D. et al. (2020), “Assessing the public policy-cycle framework in the age of artificial intelligence: From agenda-setting to policy evaluation”, Government Information Quarterly, Vol. 37/4, p. 101509, https://doi.org/10.1016/j.giq.2020.101509.
[96] Valle-Cruz, D., R. Garcia-Contreras and R. Gil-Garcia (2023), “Exploring the negative impacts of artificial intelligence in government: the dark side of intelligent algorithms and cognitive machines”, International Review of Administrative Sciences, https://doi.org/10.1177/002085232311870.
[29] Williams, C. (2025), There will be no immediate productivity boost from AI, https://www.economist.com/the-world-ahead/2024/11/20/there-will-be-no-immediate-productivity-boost-from-ai.
[63] Wirjo, A. et al. (2022), Artificial Intelligence in Economic Policymaking, Policy Brief No. 52, https://www.apec.org/docs/default-source/publications/2022/11/artificial-intelligence-in-economic-policymaking/222_psu_artificial-intelligence-in-economic-policymaking.pdf.
[44] Xu, G., M. Xue and J. Zhao (2023), “The Relationship of Artificial Intelligence Opportunity Perception and Employee Workplace Well-Being: A Moderated Mediation Model”, International Journal of Environmental Research and Public Health, Vol. 20/3, p. 1974, https://doi.org/10.3390/ijerph20031974.
[156] Zehnle, M., C. Hildebrand and A. Valenzuela (2025), “Not all AI is created equal: A meta-analysis revealing drivers of AI resistance across markets, methods, and time”, International Journal of Research in Marketing, https://doi.org/10.1016/j.ijresmar.2025.02.005.
[77] Zhang, Z., L. Wang and C. Lee (2023), “Recent Advances in Artificial Intelligence Sensors”, Advanced Sensor Research, Vol. 2/8, p. 2200072, https://doi.org/10.1002/adsr.202200072.
Notes
2. The United States alone has catalogued more than 2 000 use cases in civilian federal government agencies (https://github.com/ombegov/2024-Federal-AI-Use-Case-Inventory). Similarly, the European Commission has catalogued more than 1 300 (https://interoperable-europe.ec.europa.eu/collection/public-sector-tech-watch/cases). More than 700 use cases have been catalogued in Latin American and Caribbean (LAC) governments (https://sistemaspublicos.tech/sistemas-de-ia-en-america-latina).
3. See https://trends.oecd-opsi.org, https://cross-border.oecd-opsi.org, (OECD, 2023[38]), and (OECD, 2024[105]).
4. See https://oecd-opsi.org/innovation-tag/artificial-intelligence-ai and https://oecd.ai/en/dashboards/policy-initiatives?orderBy=startYearDesc&page=1&terms=&initiativeTypeIds=123.
5. These findings are based on a survey of 1 000 senior executives from 20 sectors across 59 countries in Asia, Europe and North America.
6. Unless otherwise cited, the sections below on the benefits of AI in the public sector are based on analysis of the functions of government and use cases presented in Chapter 5 of this report.
7. “Noise” is also a term used in ML to mean “random or unpredictable fluctuations in data that disrupt the ability to identify target patterns or relationships. The result is decreased accuracy or reliability of a model’s predictions or output” (DataHeroes, 2023[160]). This is not the concept of noise discussed in this chapter, which focuses on factors that can influence humans.
8. For more information on this topic, see https://www.oecd.org/en/topics/behavioural-science and https://oecd-opsi.org/work-areas/behavioural-insights.
10. See the MIT AI Risk Repository, a living database of over 1 000 AI risks (https://airisk.mit.edu). The OECD AI Incidents Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems (https://oecd.ai/incidents).
12. Overfitting refers to cases where the algorithm is too specific to the extent that it captures and focuses too much on noise and anomalies (Berryhill et al., 2019[8]). During the training phase, an overfitting model may achieve a high level of accuracy and problems may go unnoticed. However, once the trained model is exposed to new data, accuracy can drop severely.
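The pattern described in note 12 — near-perfect accuracy during training, followed by a sharp drop on new data — can be illustrated with a minimal, purely hypothetical sketch (not drawn from this report): a degree-9 polynomial fitted to ten noisy points memorises the noise, while a simple degree-1 fit captures the underlying trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underlying relationship: y = 2x, observed with noise
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, 10)
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test + rng.normal(0, 0.2, 10)

def mse(coeffs, x, y):
    """Mean squared error of a polynomial fit on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)   # captures the trend
overfit = np.polyfit(x_train, y_train, deg=9)  # memorises the noise

print(f"degree 1: train={mse(simple, x_train, y_train):.4f}, "
      f"test={mse(simple, x_test, y_test):.4f}")
print(f"degree 9: train={mse(overfit, x_train, y_train):.6f}, "
      f"test={mse(overfit, x_test, y_test):.4f}")
```

The degree-9 model's training error is near zero (the problem "goes unnoticed"), yet its error on the held-out points is far larger, which is precisely the accuracy drop the note describes.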
13. The index does not distinguish between legitimate and illegitimate uses of AI surveillance techniques. Rather, the purpose of the research is to show how new surveillance capabilities are transforming governments’ ability to monitor and track individuals or groups.
14. Digital Taylorism refers to the modern adaptation of Frederick Winslow Taylor's principles of scientific management, utilising digital technologies to monitor and control employee activities with the goal of enhancing efficiency and productivity. This approach involves breaking down complex tasks into simpler components, standardising workflows, and employing data-driven methods to oversee and evaluate worker performance. While it aims to optimise organisational operations, critics argue that it may lead to decreased worker autonomy and increased surveillance in the workplace (Grzegorzek, 2024[161]).
16. The OECD Directorate for Science, Technology and Innovation (STI) has a dedicated line of work on digital security. See https://www.oecd.org/en/topics/policy-issues/digital-security for more information.
17. An AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms: (a) injury or harm to the health of a person or groups of people; (b) disruption of the management and operation of critical infrastructure; (c) violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights; or (d) harm to property, communities or the environment. An AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident, i.e. any of the following harms: (a) injury or harm to the health of a person or groups of people; (b) disruption of the management and operation of critical infrastructure; (c) violations to human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights; or (d) harm to property, communities or the environment. For more information, see https://oecd.ai/incidents-methodology and (OECD, 2025[162]; 2024[163]).