Common guideposts to promote interoperability in AI risk management
Policy paper
Abstract
The OECD AI Principles call for AI actors to be accountable for the proper functioning of their AI systems, in accordance with their role, context, and ability to act. Likewise, the OECD Guidelines for Multinational Enterprises aim to minimise adverse impacts that may be associated with an enterprise's operations, products and services. Developing 'trustworthy' and 'responsible' AI systems requires identifying and managing AI risks. As calls grow for accountability mechanisms and risk management frameworks, interoperability between those frameworks would enhance efficiency and reduce enforcement and compliance costs. This report analyses the commonalities of AI risk management frameworks. It demonstrates that, while individual elements may differ, all the frameworks analysed follow a similar, and sometimes functionally equivalent, risk management process.