OECD Due Diligence Guidance for Responsible AI
Glossary
AI system
Term as used in OECD standards and reports referred to in this guidance

The definition for an AI system is from the OECD Recommendation on AI that was updated in 2023 (OECD, 2024[6]). The Recommendation defines an AI system as a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. The reasoning behind this definition is explained in detail in a memorandum published by the OECD (2024[2]).

Term as used by other risk management frameworks

The EU AIA, US NIST RMF, ASEAN Guide, and Council of Europe Convention all either directly adopt, refer to, or only slightly deviate from the definition used in the OECD Recommendation on AI.

Differences in terminology and application for this guidance

For the purposes of this guidance, an AI system can be understood according to the definition used in the OECD Recommendation on AI.
Assess / Measure / Evaluate
Term as used in OECD standards and reports referred to in this guidance

The Interoperability Framework* describes this step in the process as discovering risks, analysing the mechanisms by which those risks may occur, and evaluating their likelihood of occurring as well as their severity. The OECD DDG uses the same term in much the same way, but also recommends that businesses assess their linkage or contribution to impacts caused by business relationships. This step also includes the risk prioritisation process, in which businesses prioritise the most significant risks and impacts for action, based on severity and likelihood.

Term as used by other risk management frameworks

Other risk management frameworks such as the NIST RMF, ISO 31000, IEEE 7000-21, and ISO/IEC Guide 51 refer to this step in the process as either measuring or evaluating. Under the NIST RMF, measuring risks includes using metrics for trustworthy characteristics and social impact to track risks. The UNGPs are aligned with the MNE Guidelines and OECD DDG (see Principles 17 and 24).

Differences in terminology and application for this guidance

These instruments use the terms in similar ways but with slightly different scopes. For example, the assessment could be in relation to trustworthiness objectives, to involvement with the impact (e.g., cause, contribute or directly linked), or to risks for the company. For the purposes of this guidance, the term ‘ASSESS’ can include all of these aspects.
* The Interoperability Framework is derived from the paper on Advancing Accountability in AI which sets out common high-level steps for AI risk management that appear across multiple frameworks (OECD, 2023[17]).
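The prioritisation described in the assessment step (ranking risks by severity and likelihood) can be sketched in code. This is a minimal, purely illustrative sketch in Python: the scoring scales, tie-breaking rule and example risk names are invented assumptions for demonstration, not part of any OECD instrument.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    severity: int    # 1 (minor) to 5 (severe); illustrative scale
    likelihood: int  # 1 (rare) to 5 (almost certain); illustrative scale

    @property
    def score(self) -> int:
        # A simple composite measure: severity times likelihood.
        return self.severity * self.likelihood

def prioritise(risks: list[Risk]) -> list[Risk]:
    # Most significant risks first; severity breaks ties, an assumed
    # rule reflecting the emphasis on severity in due diligence guidance.
    return sorted(risks, key=lambda r: (r.score, r.severity), reverse=True)

# Hypothetical example risks for a company using an AI system.
risks = [
    Risk("biased hiring recommendations", severity=4, likelihood=3),
    Risk("data-centre energy overrun", severity=2, likelihood=4),
    Risk("unlawful data scraping by a supplier", severity=5, likelihood=2),
]

for r in prioritise(risks):
    print(f"{r.name}: {r.score}")
```

In practice, severity under the OECD DDG is itself multidimensional (scale, scope and irremediability), so a single numeric score is at best a screening aid, not a substitute for qualitative judgement.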
Define / Identify / Map / Scope
Term as used in OECD standards and reports referred to in this guidance

The Interoperability Framework* recommends defining the scope and context of an AI system and the criteria for evaluating risk, e.g., at the governance level, process level, and/or technical level. The equivalent term used in the OECD DDG is ‘Identify’, essentially a broad scoping exercise to identify all areas of the business, across its operations, products and services, and also its relationships, where risks are most likely to be present and significant.

Terms as used by other key risk management frameworks

Other risk management frameworks such as the NIST RMF and ISO 31000 refer to this process as ‘Mapping’ and ‘Scoping’, respectively. Similar to the Interoperability Framework, the NIST RMF offers specific recommendations on what needs to be mapped, e.g., the purpose and uses of the AI system, risks relating to components of the AI system, including its software and the data it uses, as well as its impacts on individuals, groups and society. The UNGPs are aligned with the MNE Guidelines and OECD DDG (see Principle 15).

Differences in terminology and application for this guidance

Different instruments use the terms ‘define’, ‘identify’, ‘map’ and ‘scope’ to refer to generally the same broad set of actions. The ‘MAP’ function of the NIST RMF seems to be the most specific and applicable to AI systems. Likewise, all the risk management frameworks describe ‘Define’ (or ‘Identify’ / ‘Map’ / ‘Scope’) as a distinct step in the risk management process that is an essential foundation for assessing risks and impacts. For the purposes of this guidance, the term ‘DEFINE’ is considered to include all of these aspects.
Due diligence
Term as used in OECD standards and reports referred to in this guidance

Due diligence is understood in the MNE Guidelines as the process through which enterprises can identify, prevent, mitigate and account for how they address their actual and potential adverse impacts as an integral part of business decision-making and risk management systems. The MNE Guidelines outline the following measures as part of a due diligence process:

1. embedding RBC into policies and management systems;
2. identifying and assessing actual and potential adverse impacts associated with the enterprise’s operations, products or services;
3. ceasing, preventing and mitigating adverse impacts;
4. tracking implementation and results;
5. communicating how impacts are addressed; and
6. providing for or co-operating in remediation when appropriate.

Term as used by other risk management frameworks

The UN Guiding Principles on Business and Human Rights (UNGPs), developed alongside the 2011 update of the MNE Guidelines, are aligned with the latter on the term due diligence. The UNGPs define due diligence as a process that includes “assessing actual and potential human rights impacts, integrating and acting upon the findings, tracking responses, and communicating how impacts are addressed” (OECD, 2023[17]). Likewise, the UNGPs also extend the due diligence expectation to impacts that enterprises cause, contribute to or are directly linked to through their business relationships. Other risk frameworks usually refer to or adapt the ISO 31000 definition of “risk management” to refer to processes similar or related to the due diligence expectations of the MNE Guidelines. ISO 31000 provides that “[r]isk management refers to coordinated activities to direct and control an organisation with regard to risk.”

Differences in terminology and application for this guidance

Generally, the concepts related to due diligence are aligned across frameworks. The key difference in terminology is the explicit reference in the MNE Guidelines and UNGPs to due diligence pertaining to enterprises’ business relationships, rather than focusing solely on their own operations, products and services. For the purpose of this guidance, due diligence can be understood as the more expansive term used in the MNE Guidelines / OECD DDG and UNGPs, which includes management of risks in an organisation’s own operations, products and services, as well as in its business relationships.
Monitor and review / Track
Term as used in OECD standards and reports referred to in this guidance

‘Monitoring and reviewing’ are understood as a continuous activity to check risks and the steps taken to treat them. The equivalent term used in the OECD DDG is ‘Track’: specifically, tracking the implementation, effectiveness and outcomes of due diligence activities (i.e., measures to identify, prevent, mitigate and remedy impacts), including with business relationships. Tracking is conducted on a periodic basis and can come in the form of independent, third-party audits.

Term as used in other key risk management frameworks

Other frameworks refer to monitoring and review in similar ways. In some cases, it is a distinct step of the due diligence process (e.g., in ISO 31000), and in other cases it is part of a broader step focused on internal governance (e.g., in the NIST Risk Management Framework (NIST RMF)). The UNGPs are aligned with the MNE Guidelines and OECD DDG (see Principles 17 and 20).

Differences in terminology and application for this guidance

Generally, the concepts are equivalent across frameworks.
Risks / Incidents / Hazards
Term as used in OECD standards and reports referred to in this guidance

OECD RBC instruments describe adverse impacts or potential adverse impacts (i.e., risks) in the context of topics covered in the chapters of the MNE Guidelines: human rights, including workers and industrial relations, environment, bribery and corruption, disclosure, and consumer interests. The MNE Guidelines refer to risk as the likelihood of adverse impacts on people, the environment and society that enterprises cause, contribute to, or to which they are directly linked. In other words, it is an outward-facing approach to risk. The likelihood of adverse impacts increases in situations where an enterprise’s behaviour or the circumstances associated with its supply chains or business relationships are not consistent with the recommendations in the MNE Guidelines. A risk of adverse impacts may exist when there is the potential for behaviour that is inconsistent with the recommendations in the MNE Guidelines, because it involves impacts that may occur in the future.

In a separate but related workstream on monitoring AI incidents, the OECD Network of Experts is working towards developing definitions of AI incidents and hazards (OECD, 2024[34]), which are defined as follows:

An AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms: (a) injury or harm to the health of a person or groups of people; (b) disruption of the management and operation of critical infrastructure; (c) failure to respect human rights or a breach of obligations under the applicable law intended to protect labour and intellectual property rights; (d) harm to property, communities or the environment.

An AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident, i.e., any of the harms listed above.

Terms as used by other risk management frameworks

The EU AIA refers to risks as the combination of the probability of an occurrence of harm and the severity of that harm (European Union, 2024[14]), with “harm” being understood as harm to public interests and fundamental rights that are protected by European Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm. The US NIST RMF adopts the same approach as ISO 31000 in framing risk as both positive and negative (US National Institute of Standards and Technology, 2023[35]) (ISO, 2018[36]). It refers to risk as the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event. The impacts, or consequences, of AI systems can be positive, negative, or both, and can result in opportunities or threats.

Differences in terminology and application for this guidance

The definition of risk in the MNE Guidelines is broader in scope in that it includes impacts that enterprises “can cause, contribute to or to which they are directly linked.” Essentially, this expands relevant risks and impacts beyond own operations and into the realm of impacts and risks associated with business relationships in the value chain. The scope of specific risks covered also varies across frameworks. For the purpose of this guidance, risk can be understood using the definition from the MNE Guidelines, as it includes AI incidents and hazards but is more expansive for the purposes of value chain due diligence. Where relevant and more specific, the guidance refers to incidents and hazards.
Stakeholders
Term as used in OECD standards and reports referred to in this guidance

The OECD Recommendation on AI defines stakeholders as all organisations and individuals involved in, or affected by, AI systems, directly or indirectly. The OECD MNE Guidelines describe relevant stakeholders as persons or groups, or their legitimate representatives, who have rights or interests related to the matters covered by the MNE Guidelines that are or could be affected by adverse impacts associated with the enterprise’s operations, products or services.

Term as used by other risk management frameworks

Other frameworks use the term stakeholders to refer generally to organisations and individuals. In different contexts, “relevant stakeholders” usually refers to users of the AI system, civil society, workers’ representatives, SMEs, and other enterprises.

Differences in terminology and application for this guidance

For the purposes of this guidance, stakeholders should be understood in the broadest sense as persons, groups or organisations involved in or affected by AI systems and the enterprises involved in their development and use.
Treat / Cease, prevent, mitigate and remedy / Manage
Term as used in OECD standards and reports referred to in this guidance

The Interoperability Framework defines ‘risk treatment’ as using techniques to prevent, mitigate or cease risks, based on their likelihood and impact. The OECD due diligence recommendations take a similar approach to risk treatment, but further specify the types of action to be taken based on a company’s involvement with the risk (i.e., whether it caused, contributed to, or was directly linked to the risk). Companies are expected to cease, prevent, and/or mitigate identified risks and impacts. In circumstances where a company is contributing to or directly linked to an impact through a business relationship, it should seek, to the extent possible, to use its leverage, individually or in collaboration with others, to effect change. When a company has caused or contributed to an adverse impact, it is expected to provide for, or co-operate in, remediation (see further discussion on the term “REMEDY” below).

Term as used by other risk management frameworks

The NIST RMF refers to risk treatment under its MANAGE function as “plans to respond to, recover from, and communicate about incidents or events.” Although it uses different terminology, the set of actions under this function is generally consistent with the OECD due diligence recommendations. Expected actions include prioritising and allocating resources to risk management, engaging with impacted stakeholders, and continuously monitoring and documenting risk management efforts. The NIST RMF also notes that risk response options can include mitigating, transferring, avoiding, or accepting.

Differences in terminology and application for this guidance

For the purposes of this guidance, ‘Treat’ can be understood to mean ceasing, preventing or mitigating risks that companies cause, contribute to, or are directly linked to through their business relationships.