Artificial intelligence (AI) is one of the most transformative forces of the 21st century, and it is becoming an integral part of digital government worldwide. Governments’ use of AI can facilitate automated and tailored internal processes and public services, foster better decision making and forecasting, strengthen fraud detection, and improve public servants’ job quality and learning – all with tangible impacts. For example, The Alan Turing Institute estimates that AI could automate 84% of repetitive public service transactions in the United Kingdom, saving the equivalent of 1 200 person-years of work annually. Despite its promise, government AI use trails the private sector.
Governing with Artificial Intelligence
Executive summary
Key findings: How AI can serve citizens
The OECD has conducted in-depth research on AI across 11 core functions of government, covering 200 use cases. The results suggest that AI is most prevalent, in terms of total use cases, in public service and justice functions and civic participation, with relatively less use seen in policy evaluation, tax administration and civil service reform. In between are public procurement, financial management, fighting corruption and promoting public integrity, and regulatory design and delivery. Possible explanations for this distribution include that some functions encompass a wider variety of uses (public services) while others are narrower (civil service reform, tax administration). Also, some face more regulatory constraints (e.g. tax administration, given rules on using tax data), while some face fewer implementation challenges and can mature faster (civic participation). In some functions, such as justice administration, public demands and growing transaction backlogs may precipitate AI adoption as an opportunity to tackle urgent challenges.
AI’s use is more prevalent in internal operations and public service delivery, but less prominent in government oversight. Less use is also seen in policymaking, consistent with previous OECD analysis. Use cases often rely on classic rules-based approaches or established machine learning (ML) techniques, with generative AI (GenAI), including large language models (LLMs), being less common. In terms of benefits, the largest share of cases seeks to promote automated, streamlined and tailored processes and services; followed by better decision making and forecasting; and enhanced accountability and anomaly detection. A few cases seek to unlock new opportunities for external stakeholders (e.g. citizens, businesses) through access to government-provided AI systems, but further efforts may be warranted.
Risks for AI use in government
There is no such thing as risk-free AI adoption. Unlocking AI’s benefits requires mitigating its risks. Biased algorithms can result in adverse outcomes; AI misuse can infringe on or prevent the free exercise of human rights; insufficient transparency, explainability and public understanding of AI can erode accountability and cause public resistance; and the over-reliance on AI can widen digital divides and allow systemic errors to propagate, weakening citizen trust in government. Such risks may be amplified in countries lacking mechanisms to guarantee the exercise, protection and promotion of human rights, or result from AI misuse by individual public servants. Public service workforce displacement could also occur if governments seek to replace rather than augment civil servants’ capabilities.
Governments’ failure to leverage AI also represents risk, resulting in missed opportunities to yield benefits and widening the gap between public and private sector capacities. They will need to adopt AI if they want to meet increasing citizen demands and strengthen trust in government. Ignoring AI transformation or waiting for all unknowns to be resolved relegates government to being a technology-taker rather than an option-shaper, incurring significant costs and disadvantages. If governments do not bolster internal AI capacities soon, they may struggle to ever catch up.
Governments also face AI implementation challenges
Challenges in scaling up successful AI applications mean government AI initiatives often remain in pilot phases. Skills gaps and difficulties in obtaining and sharing quality data are encountered across government functions. Moreover, although strategies for AI in government are common, a lack of concrete guidance hinders their translation into practice. These factors compound risk aversion, hindering governments’ ability to innovate with AI. Furthermore, insufficient monitoring and evaluation mechanisms restrict their ability to gauge progress, detect risks and demonstrate return on investment. Financial costs are also a common challenge.
Some challenges are more prevalent in some functions than others. For instance, tax administration faces complex laws and rules around tax processes and data, whereas public procurement struggles with a lack of established rules around AI. Finally, the use of AI in functions such as public financial management is constrained by outdated legacy technology infrastructure unsuitable for AI development or use.
How governments can ensure their use of AI is trustworthy
To reap the benefits of AI in government while mitigating its risks and overcoming implementation challenges, governments need to put in place:
Enablers to facilitate trustworthy adoption, including governance, data, digital infrastructure, skills, financial investments, agile procurement processes and capacities to partner with non-governmental actors.
Guardrails to guide the use of AI, including rules and policies, guidance and frameworks, transparency and accountability mechanisms that span the AI system lifecycle, and oversight and advisory bodies to guide and evaluate efforts.
Engagement approaches to shape user-centred and responsive AI, including mechanisms to involve key stakeholders such as the public, civil society and businesses.
More action is needed to invest in and adopt trustworthy AI in government, but existing approaches provide lessons and inspiration
To the extent possible, the OECD encourages governments to prioritise high-benefit, low-risk applications of AI, especially when building an initial level of maturity. Most, however, lack the processes for holistic measurement of potential or realised results (spending efficiency, service quality, potential harms) that would allow them to make these determinations. Addressing this should be a priority for governments, ensuring AI implementations are transparent, fair and secure.
Many government AI efforts are in their infancy, but some are yielding valuable lessons. The OECD is committed to expanding an evidence base of what works through data collection and analysis, with a focus on how governments can leverage trustworthy AI to deliver public value.