AI tools are used across various public bodies with a role in the governance of critical risks, including disaster management, law enforcement and intersecting areas such as counterterrorism, customs and border management. Relevant agencies use AI technologies to enhance the scale, speed and accuracy of their capabilities for anticipatory analysis, real-time surveillance and monitoring, and administrative processes, while lowering operational costs. This can strengthen investigation and crisis management capabilities, optimise resource allocation and improve response times.
Because AI use has high-impact potential in this space, especially with regard to law enforcement, these entities are considered high-risk AI end users (OECD, 2020[254]) and face additional considerations when drawing on AI for public good and safety. Of all types of public institutions, police command the highest levels of trust from citizens (OECD, 2024[46]). Relevant agencies need to uphold this trust by ensuring AI is adopted in a trustworthy manner that mitigates ethical challenges and risks to the protection of personal rights, including through vigilance regarding how underlying data are sourced, maintained and applied (OECD, 2020[255]; 2019[191]; 2022[256]). Lessons can be derived from existing use cases, and, as discussed below, some governments have put in place heightened accountability expectations for AI use by these actors.