There is significant potential in using AI in additional service areas and in novel use cases. For instance, in healthcare, AI could enable tailored and preventative interventions and informed behavioural “nudges”, leading to better outcomes and allowing health professionals more time for care. It could also yield new techniques to unlock value from vast health data assets, 97% of which remain untapped in OECD countries (Sumner et al., 2023[242]; Bennett Institute, 2024[243]; OECD, 2024[244]). In this area, the OECD (2017[245]) standard for promoting the use of health data in the public interest guides governments in making this data more available for innovation in healthcare. Challenges in implementing the principles of this recommendation are explored in OECD (2022[246]) and in a forthcoming OECD report, “Facilitating the Secondary Use of Health Data for Public Interest Purposes Across Borders”. AI advances could also help ease projected healthcare workforce shortages of 3.5 million by 2030 in OECD countries (OECD, 2024[244]).
AI use in public service design and delivery is still in an experimental phase. While there are examples of automation, streamlining and even new types of services emerging, government organisations are still far from broad adoption and operational use. AI hype and demands for cost efficiency may push governments towards rapid automation. Yet evaluations and critiques of early practices, as discussed in this section, suggest that a more gradual approach taken together with public servants may be more likely to succeed. Governments need strategies on how to use hybrid models where AI augments human decision-making, and on how to deal with the evolving role of public officials in this new format of algorithmic bureaucracy (Vogl et al., 2019[247]). While the promise of AI may be to free up public servants’ time for higher-value tasks — such as engaging with citizens to tackle complex social issues — it can also have the opposite effect, distancing public servants from society as their role in direct data collection and interaction with service users diminishes (Bullock, Young and Wang, 2020[248]; Madan and Ashok, 2023[194]) (see also the discussion on “AI in civil service reform” in this chapter).
Another emerging approach to automation in public services is rules as code (RaC), which involves encoding government rules into machine-consumable formats alongside traditional legal text (Mohun and Roberts, 2020[4]). As services increasingly rely on AI to support decision-making, RaC provides a foundation for more accurate, transparent and scalable AI applications in government. By embedding rules in digital formats, RaC could enhance the consistency of AI-powered service delivery, ensuring that automated systems interpret and apply regulations correctly across different contexts. Several governments have begun using RaC to streamline regulatory compliance and enhance AI-driven service delivery. For example, the United Arab Emirates has launched a nationwide Rules as Code platform that aims to develop AI-based laws and regulations, transforming the financial ecosystem.10
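To make the rules-as-code idea concrete, the sketch below encodes a hypothetical eligibility rule as executable logic alongside its plain-language text. The benefit, thresholds and field names are all invented for illustration; real RaC platforms (and the UAE initiative mentioned above) maintain far richer representations of legislation.

```python
# Illustrative "rules as code" sketch: a hypothetical housing-benefit
# eligibility rule expressed as machine-consumable logic. All names
# and thresholds here are invented, not drawn from any real statute.
from dataclasses import dataclass

@dataclass
class Applicant:
    annual_income: float
    household_size: int
    is_resident: bool

# Parameterised so the same encoding can feed a citizen-facing
# calculator and an automated decision-support system consistently.
INCOME_CEILING_PER_PERSON = 12_000  # hypothetical threshold

def eligible_for_housing_benefit(a: Applicant) -> bool:
    """Machine-consumable version of a rule whose legal text might
    read: 'A resident household qualifies if annual income does not
    exceed 12 000 per household member.'"""
    if not a.is_resident:
        return False
    return a.annual_income <= INCOME_CEILING_PER_PERSON * a.household_size
```

Because the rule is data plus code rather than free text, every AI-assisted service that consumes it applies the same interpretation, which is the consistency gain the paragraph above describes.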
Unlike conventional task-based approaches — where service delivery is segmented into distinct sequential operations — AI-enabled approaches can integrate functions and decision-making processes simultaneously. This represents a potentially transformative shift in how public services are organised.
When it comes to the broader acceptance of AI-based public services by citizens and residents, a few clear insights emerge from existing research. It is generally accepted that AI is better at processing large amounts of data and identifying patterns (Rane, Choudhary and Rane, 2024[249]). However, a combination of factors including perceived usefulness of AI systems, performance expectancy, attitudes, trust and effort expectancy tend to influence the willingness to use AI across sectors (Kelly, Kaye and Oviedo-Trespalacios, 2023[250]). Perception of risk and trust in AI systems differ when dealing with general public services (i.e. provided by the government without a specific request and concerning all or most citizens, such as basic education and public safety) versus specific public services (i.e. explicitly requested by citizens and affecting only one or a few citizens, such as elderly care programmes or housing assistance for low-income populations). General services are more abstract and their users have lower levels of situational awareness, meaning that they may also accept AI more easily.11 In specific services, the opportunity to decide becomes relevant: users generally want to have a choice over whether an AI application is used in their service decision (Gesk and Leyer, 2022[203]).
Early research also indicates that trust in AI applications is higher if they are integrated into already existing services rather than services that are completely new (Aoki, 2020[251]). This means that more proactive service innovations may have to prove themselves to users and may face more scrutiny. In all cases, participation of users in co-creating public services is correlated with a positive perception of AI decisions and could drive higher adoption (Gesk and Leyer, 2022[203]). This is very important, as people’s experiences with services can influence how they perceive their governments overall, and public satisfaction with administrative and social services is an important driver of trust (OECD, 2024[46]). Public organisations can risk their democratic legitimacy if the public does not trust the services governments intend to provide with AI (Aoki, Tay and Yarime, 2024[252]; Aoki, 2020[251]).
Governments also need to better understand and consider user needs to design AI-enabled services that people are likely to want to use. This will involve ensuring that policies and services are designed for how people actually behave, rather than how they are assumed to behave. Even well-intended policies can fail when they introduce unnecessary friction (“sludge”), making it harder for people to access services, complete applications or make informed decisions. Further use of behavioural science can assist this by offering practical levers to reduce sludge, improve accessibility and ensure services are user-centred and trusted by citizens. By addressing cognitive barriers — such as complexity, decision fatigue and inertia — governments can design services that are simpler, more intuitive and easier to navigate. AI-powered tools can identify services that need sludge audits, assess intervention effectiveness and improve service design. NLP and sentiment analysis can analyse public feedback in real time, indicating pain points across online platforms and customer service channels. AI can also quantify user interactions and tailor interventions to different demographics, ensuring policies are better targeted and more inclusive. As governments move towards digital-by-design and data-driven governance, integrating behavioural insights with AI will be key to making public services more user-friendly, efficient and responsive (OECD, 2024[253]).
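A minimal sketch of how sentiment scoring over service feedback might flag candidate pain points for a sludge audit. The word lists, feedback strings and scoring rule are toy assumptions invented for illustration; operational NLP pipelines would use trained language models rather than keyword counts.

```python
# Toy sentiment scorer for public-service feedback. Word lists and
# feedback texts are invented for illustration only; real systems
# would use trained NLP models, not keyword matching.
POSITIVE = {"easy", "fast", "helpful", "clear"}
NEGATIVE = {"confusing", "slow", "stuck", "rejected"}

def sentiment(text: str) -> int:
    """Return (positive word count) - (negative word count)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

feedback = [
    "The online form was easy and fast",
    "I got stuck on a confusing question and my application was rejected",
]

# Negative-scoring comments mark services as candidates for a sludge audit.
pain_points = [t for t in feedback if sentiment(t) < 0]
```

Run at scale over online platforms and customer-service channels, this kind of scoring is what lets feedback surface friction in near real time, as described above.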
Finally, there is a considerable gap between the speed at which AI is being introduced to public services and the extent to which robust evaluations are carried out. The more thorough analyses already point to many unforeseen and sometimes adverse effects that may arise alongside general efficiency gains. This area needs further investment.
In all of these areas, international cooperation in building public sector AI capacity is crucial. The more governments share AI practices in public service development (including sharing open algorithms, infrastructure and intergovernmental datasets, and joint efforts for the responsible development of emerging technologies), the more likely it is that quality across the board can be assured. Multilateral initiatives such as the Global Partnership on Artificial Intelligence (GPAI) can play a crucial role in this regard.