Social security administrations across the European Union (EU) face challenges in service and benefit delivery, including high administrative burdens, fragmented data systems, and complex eligibility processes. Access to critical benefits such as pensions, family allowances, and disability benefits is often hindered by a lack of information and by these administrative and data barriers. This not only affects individual well-being but also reduces the impact of public investment and can erode trust in social protection systems.
When used in a trustworthy and effective manner, artificial intelligence (AI) can help public institutions better identify needs, personalise communication, support back-office processes and deliver services, including social security, more proactively and efficiently. However, without the right governance, controls, and oversight in place, there is also a risk that AI systems cause harm. Social security is therefore one of the specific areas highlighted as high-risk under the EU's AI Act, especially where AI systems are intended to evaluate individuals' eligibility for social protection benefits.
Realising the benefits of AI systems in the social security sector thus requires an AI-ready workforce, strong governance, appropriate safeguards, and meaningful engagement with users and stakeholders. With the right foundations in place, AI systems can support more effective and user-centred public services.
Chapter 1 presents three case studies of AI use in the public sector, exploring how AI can increase efficiency, improve accuracy, and enhance citizen access to benefits. In Catalonia, an AI-powered cloud platform automates the process of determining eligibility for energy poverty support, improving data sharing and making it clearer which benefits accrue to whom. In Germany, the Federal Employment Agency uses a machine learning tool to categorise job postings from unstructured formats, speeding up the dissemination of job offers to potential applicants. In Finland, the national social security institution, Kela, has developed an AI platform hosted on-site to automate the processing of millions of documents annually. These documents, submitted by customers with various benefit applications, are now categorised and processed automatically, delivering cost, time, and environmental savings.
Unlocking the potential of AI in the social security sector requires strong governance built around the three key pillars of the OECD Framework for Trustworthy AI in Government: enablers, guardrails, and engagement. Chapter 2 looks at governance frameworks and strategies for AI at the national level, and how social security institutions can build on these efforts to ensure that the use of AI in the social security sector is effective, trustworthy and human-centred. Key enablers such as infrastructure, data quality, and strategic investment approaches are needed to foster more coherent co-ordination between central governments and social security institutions on the use of AI systems in the public sector. Guardrails, including ethical oversight and compliance with the EU AI Act, are emerging but must be further embedded across the AI lifecycle to ensure safety and fairness. This is particularly relevant in high-risk social security applications. Institutions such as the German Federal Employment Agency and Kela in Finland have developed internal guidelines, but in most cases these remain voluntary. Finally, it is critical to engage with diverse stakeholders across the design, development, testing, and oversight of AI systems to ensure that they meet user needs, do not have harmful unintended consequences, and help build public trust. While engagement with public servants is improving, broader involvement of service users, civil society, and cross-border actors remains limited, hindering trust and effectiveness. Further work is needed to tailor these approaches to the sector's specific needs, ensuring ethical, effective, and inclusive AI integration.
Finally, in Chapter 3, the report highlights that the adoption of AI in social security institutions, and in public administrations more broadly, also requires a workforce that is able to develop, implement, and use AI tools. AI adoption will have an impact on the workforce and skills needs in public administration, augmenting human tasks and transforming organisational processes. Workforce development ‒ spanning recruitment, outsourcing, training, and organisational culture ‒ should therefore be an integral part of any institutional strategy for AI. Governments can use different tools to improve the recruitment and retention of in-demand digital and data talent, and provide training tailored to different staff profiles, from general employees to digital and data professionals and leaders. Outsourcing is a common way to address AI skills gaps; however, building in-house AI capabilities offers advantages in accountability and data privacy, and aligns more closely with the public service mission of social security institutions. Cultivating a strong culture of innovation and learning is key to supporting effective and trustworthy AI adoption in the public sector.
Realising the benefits of AI in social security requires more than deploying new AI tools – it demands a comprehensive, values-driven approach that invests in people, processes, and public trust. By focusing on strong enablers, ethical guardrails, and meaningful engagement, governments can harness AI to modernise service delivery, close access gaps, and ensure that social protection systems are resilient, inclusive, and ready for the future.