Governing with Artificial Intelligence
Introduction
AI offers tremendous potential for governments. It can help them automate and tailor public services, improve decision-making, detect fraud, and enrich civil servants' work and learning. Realising these benefits, however, hinges on managing risks: biased data in AI systems can lead to harmful decisions, a lack of transparency erodes accountability, and overreliance can widen digital divides and propagate errors, undermining citizen trust. Weighing these trade-offs must account for governments' specific circumstances: adoption trails parts of the private sector, slowed by skills gaps, legacy IT systems, limited data, tight budgets, and stricter requirements for privacy, transparency, and representation.
Key figures
- 200 AI use cases analysed for this report
- 57% of use cases support automated, streamlined or tailored processes and services – the most common goal among governments
- 15% of governments had a framework for AI investment in 2023, which could help address common challenges
AI use and maturity vary across government functions and countries
Government adoption of AI began relatively recently and lags behind some private firms. Early data suggest adoption is most prominent in areas such as public services, civic participation, and justice – areas with high transaction volumes and direct citizen interaction. Adoption has been more limited in policy evaluation, tax administration, and civil service management.
While the use of AI by governments can provide significant benefits, it also poses potential societal and economic risks
The report finds that 57% of use cases support automating, streamlining or tailoring services; 45% enhance decision-making, sense-making or forecasting; and 30% aim to improve accountability and anomaly detection (categories overlap, so shares sum to more than 100%). Only 4% allow external actors to use government AI to achieve their own goals – for example, Greece's DidaktorikaAI, an AI-powered library of 50 000 publications. Contrary to expectations, AI is used more for analytical than for mundane tasks. All of these uses can carry risks if not adequately managed, including ethical risks (e.g. rights infringements), operational risks (e.g. cyber threats), widening digital divides, and public resistance to government AI.
Governments face a variety of challenges in adopting AI
Challenges in scaling successful AI mean many government initiatives remain stuck in the pilot phase. A key contributing factor is the lack of impact-measurement frameworks that could demonstrate return on investment and thereby justify further investment in AI. Skills gaps and difficulties accessing and sharing quality data are widespread across governments. And while national AI strategies are becoming more common, a lack of concrete guidance hinders their implementation. These gaps increase risk aversion and limit innovation. Financial costs, outdated laws and regulations, and legacy IT systems pose further barriers.
What can governments do?
Enablers are the foundational elements needed for effective AI in government. They support reliable design and deployment by skilled public servants, helping institutions harness AI’s full potential. This report identifies seven key enablers: governance, data, digital infrastructure, skills, investment, procurement, and partnerships with non-government actors.
Guardrails ensure trustworthy AI use in government through policies, transparency, and oversight. They manage risks, uphold legal and social values, and build public trust. However, not every guardrail needs to apply to every use case. To avoid risk aversion and inaction, governments should adopt context-appropriate guardrails in a proportionate, risk-based manner.
Governments should design AI systems that consider the needs of all actors. This requires user-centred, adaptive approaches and robust engagement with key stakeholders – including the public, civil society, businesses, and counterparts across borders – through open and transparent mechanisms.
How AI will be applied in future remains uncertain. Governments need agile, adaptive strategies and flexible frameworks to respond. Spotting weak signals early helps guide timely interventions before trends become locked in and hard to shift.
The OECD Framework for Trustworthy AI in Government provides guidance to implement these recommendations.