Looking ahead, adaptive learning systems that continually refine their predictions based on new data could play a significant role in updating risk assessments dynamically, allowing integrity bodies to stay ahead of evolving threats. AI — particularly its ability to analyse and recognise patterns in non-numeric data — can also drive future improvements in anti-corruption. By combining it with sensor and image technology (satellites, drones and aircraft capturing images or other data such as thermal readings or radar signals), AI creates new opportunities to monitor and analyse patterns at large scale to understand corruption-related activities and dynamics (Zinnbauer, 2025[105]).
Further down the line, agent-based modelling (ABM), which represents entities (officials, businesses, etc.) and their interactions, could simulate how corruption and influence evolve under different conditions (U4, 2025[106]). Doing so could help test the impact of anti-corruption or integrity policies, laws and strategies before implementation.
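To make the ABM idea concrete, the following is a deliberately minimal sketch, not a validated model: officials face bribe offers each step, acceptance depends on individual propensity and perceived audit risk, and detection deters future acceptance. All parameters (audit rates, the deterrence factor) are illustrative assumptions.

```python
import random

class Official:
    def __init__(self, propensity):
        self.propensity = propensity  # baseline willingness to accept a bribe
        self.corrupt_acts = 0

def run_simulation(n_officials=100, steps=50, audit_rate=0.1, seed=42):
    """Toy ABM: each step, every official faces a bribe offer.
    Acceptance depends on propensity and perceived audit risk;
    a detected official's propensity is halved (deterrence)."""
    rng = random.Random(seed)
    officials = [Official(rng.random()) for _ in range(n_officials)]
    for _ in range(steps):
        for o in officials:
            # Accept the bribe if propensity outweighs perceived audit risk
            if rng.random() < o.propensity * (1 - audit_rate):
                o.corrupt_acts += 1
                # Audits detect some corrupt acts and deter the official
                if rng.random() < audit_rate:
                    o.propensity *= 0.5
    return sum(o.corrupt_acts for o in officials) / (n_officials * steps)

# Compare corrupt-act prevalence under weak vs strong oversight
weak = run_simulation(audit_rate=0.05)
strong = run_simulation(audit_rate=0.30)
print(f"corrupt-act rate: weak oversight {weak:.2f}, strong {strong:.2f}")
```

Even a toy model like this lets an analyst vary a policy lever (here, the audit rate) and compare outcomes before any real-world implementation, which is the core appeal of ABM for ex-ante policy testing.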
AI-driven network analysis could also map the relationships between lobbyists, politicians, corporations and other actors more efficiently and comprehensively, revealing influence patterns, conflicts of interest and revolving-door dynamics.

While integrating AI technologies into public integrity and anti-corruption efforts offers great potential, ensuring their effective and ethical use requires careful attention to key policy considerations that promote trustworthy adoption. Recent OECD (2024[89]) work offered three pieces of advice for integrity actors exploring the use of AI:
Start by incorporating generative AI into low-risk areas and processes. This approach can help build capacity in areas where mistakes are less costly — either financially or from a compliance perspective — before scaling to riskier and more analytical tasks, including those that require more financial resources.
Consider the IT requirements for both piloting and scaling. Assess what computational resources and data storage and management capabilities may be needed, and make sure early decisions do not overly complicate future plans.
Consider internally generated or open data to demonstrate value and establish quick wins. This approach is low-cost and helps demonstrate use cases faster.
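Returning to the network analysis described earlier, a minimal standard-library sketch can illustrate how role-labelled person-organisation links surface revolving-door candidates. All names, roles and organisations below are hypothetical, and a real system would work from registries and disclosure databases rather than a hard-coded list.

```python
from collections import defaultdict

# Hypothetical relationship records: (person, role, organisation)
records = [
    ("A. Alvarez", "official", "Energy Ministry"),
    ("A. Alvarez", "lobbyist", "PetroCorp"),
    ("B. Bauer", "official", "Energy Ministry"),
    ("C. Chen", "lobbyist", "PetroCorp"),
    ("C. Chen", "lobbyist", "AgriCo"),
    ("D. Diaz", "official", "Agriculture Ministry"),
    ("D. Diaz", "board member", "AgriCo"),
]

def build_graph(records):
    """Bipartite person-organisation graph with role-labelled edges."""
    edges = defaultdict(set)
    for person, role, org in records:
        edges[person].add((role, org))
    return edges

def revolving_door_candidates(edges):
    """Flag people linked to both a public body and a private entity:
    a classic revolving-door signal worth closer human review."""
    public_roles = {"official"}
    flagged = []
    for person, links in edges.items():
        has_public = any(role in public_roles for role, _ in links)
        has_private = any(role not in public_roles for role, _ in links)
        if has_public and has_private:
            flagged.append(person)
    return sorted(flagged)

print(revolving_door_candidates(build_graph(records)))
# → ['A. Alvarez', 'D. Diaz']
```

A flag from such a query is a lead for human investigation, not a finding of wrongdoing, which is one reason the accountability safeguards discussed below matter.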
As governments continue their exploration of AI, they should establish robust transparency and accountability mechanisms to ensure AI-driven decisions are not only accurate but also appropriately interpreted and documented. For instance, while AI systems can provide valuable insights by processing vast amounts of data, and future systems could be increasingly autonomous, the ultimate authority over sensitive decisions needs to remain with human operators to preserve fairness and professional judgement (or scepticism), and ultimately to maintain public trust. Public integrity bodies should seek to ensure that: the reasoning behind decisions is understandable both to experts and lay audiences; there are recourse and challenge mechanisms in place; and, to the extent possible, AI processes are explainable. AI-generated evidence needs to meet documentation requirements to provide an appropriate audit trail, including complying with current standards to demonstrate how evidence was obtained and assessed.
Knowledge exchange is also crucial for the successful implementation and scaling of AI technologies in public integrity roles. Institutions should participate in and create platforms that enable the sharing of best practices, strategies, lessons learned and even models, code and data. Sharing will help build a more comprehensive understanding of AI's effectiveness in integrity roles and drive the development of best practices that can be widely adopted. Initiatives like the OECD Anti-Corruption in Government division’s Tech and Analytics Community of Practice or the Tech Connect for Integrity initiative serve as examples.4 These forums bring together public and private sector practitioners to discuss challenges, explore innovative solutions, and develop collective strategies to enhance the adoption of AI and other technologies. This can allow for better flow of information, pooling of resources, rapid learning and faster development and implementation of methodologies and solutions.
Building on existing guidance on the ethical and responsible use of AI, governments should go beyond broad directives and provide more detailed, actionable guidance and frameworks for implementing AI technologies. This may include comprehensive step-by-step instructions, case studies and templates that institutions can tailor to their specific needs. Such guidance could also provide advice on identifying and mitigating bias in data, utilising explainable AI techniques, building transparency into decision-making processes and incorporating redress mechanisms.
A unified and interoperable data foundation is essential for the effective use of AI. Governments need to streamline their data operations to ensure that databases and systems can seamlessly communicate and integrate. This involves creating standardised data formats, establishing robust data governance frameworks and investing in infrastructure that supports data interoperability. By promoting data standardisation and integration, as well as policies for safe data sharing and matching, governments can enhance the ability of AI systems to analyse data comprehensively, identify patterns and provide actionable insights to support public integrity.
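In practice, standardisation often starts with enforcing a shared record schema before data from different registries is matched. The sketch below shows the idea only; the field names and formats are illustrative assumptions, not any actual government standard.

```python
from datetime import date

# Hypothetical shared schema for a company-registry record
SCHEMA = {
    "entity_id": str,   # identifier shared across registries
    "name": str,
    "registered": date,
}

def normalise(record):
    """Coerce a raw registry row into the shared schema, or raise.
    Rows that fail validation are held back from cross-agency matching."""
    out = {}
    for field, ftype in SCHEMA.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        value = record[field]
        if ftype is date and isinstance(value, str):
            # Standardise dates arriving as ISO-format strings
            value = date.fromisoformat(value)
        if not isinstance(value, ftype):
            raise TypeError(f"{field}: expected {ftype.__name__}")
        out[field] = value
    return out

row = {"entity_id": "ENT-001", "name": "PetroCorp", "registered": "2019-03-01"}
print(normalise(row))
```

Validating at the point of ingestion keeps downstream AI analysis from silently mixing incompatible formats across agencies.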