This chapter introduces the concept of RBC due diligence and provides an overview of the broader AI risk management policy landscape. It also describes the target audience and how to use this guidance as a tool to navigate risk management frameworks.
OECD Due Diligence Guidance for Responsible AI
1. Introduction to RBC due diligence and key considerations for AI
Introduction to responsible AI
AI development has the potential to transform society in ways comparable to the industrial revolution or the advent of the internet. AI represents not merely an incremental advance but a transformative technology with the capacity to enhance productivity, create economic value, and solve complex challenges across a variety of sectors such as healthcare, manufacturing, logistics, and public administration. To harness this positive potential, the OECD sets out a balanced approach to responsible AI that enhances the opportunities of AI and establishes conditions for AI to be more profitable, innovative and competitive, while addressing risks of adverse impacts.
Responsible AI also depends on data suppliers, finance, and physical infrastructure as much as digital innovation. The OECD Guidelines for Multinational Enterprises on Responsible Business Conduct (“MNE Guidelines”) (OECD, 2023[1]) and the OECD Recommendation on Artificial Intelligence (“OECD Recommendation on AI” or “AI Principles” as relevant) (OECD, 2024[2]) also emphasise the role of all enterprises involved in the development and use of AI systems. This “whole-of-value chain” approach will support secure and resilient AI value chains that are more resistant to supply chain shocks and interference.
Critically, responsible AI development and use should proceed with stakeholder engagement and workers at the centre of consideration, viewing the technology as an enhancement to human capability rather than a replacement for human labour. By ensuring meaningful engagement with workers and other stakeholders, enterprises can guide AI toward applications that supplement human work rather than automate it away.
Proactively addressing potential harms associated with AI systems creates a foundation of trustworthiness that can significantly accelerate market growth and investment. When enterprises demonstrate commitment to responsible AI development and use – through the best practices described in this guidance and other OECD instruments – they build confidence among investors, customers, regulators, and policymakers. This trustworthiness translates into competitive advantage. Enterprises that establish reputations for responsible practices can more readily access capital markets, attract premium business relationships, and navigate regulatory landscapes with greater ease. Far from hindering innovation, responsible AI can actually accelerate growth by reducing friction and preventing costly reputational and societal damage that might otherwise occur.
Responsible and trustworthy AI is becoming increasingly crucial for accessing global markets as international regulatory frameworks continue to evolve. Companies that proactively integrate harm prevention into their AI development and adoption processes position themselves advantageously for cross-border expansion, potentially avoiding the substantial costs of retrofitting systems to meet various regional requirements. This forward-looking approach can transform what might otherwise be viewed as compliance costs into strategic investments that yield returns through expanded market access.
The economic case for responsible and trustworthy AI becomes particularly compelling when considering that enterprise customers increasingly include AI risk management in their procurement processes, making trustworthiness not merely an ethical consideration but a business requirement. In this way, addressing AI harms serves both ethical imperatives and business interests simultaneously, creating a virtuous cycle where responsible innovation drives both societal benefit and commercial success.
Purpose of this guidance
This guidance aims at supporting enterprises in their implementation of the MNE Guidelines and the AI Principles1. This guidance is intended to be used as a tool for multinational2 enterprises involved in the AI system value chain (i.e., enterprises that supply inputs for the development of AI systems, play an active role in the AI system lifecycle, or use AI systems in their operations, products and services, across all sectors).
The objectives of this guidance are to:
support innovation, investment and growth of enterprises in the AI value chain by providing clarity on how enterprises can proactively identify and address actual and potential adverse impacts (i.e., risks) that they may cause, contribute to or be directly linked to, and harness the positive contributions of AI to society, related to topics covered in the MNE Guidelines and AI Principles
help enterprises navigate existing international, national, multi-stakeholder or industry-led AI risk management and governance frameworks
promote policy coherence, and where possible interoperability, between the MNE Guidelines, AI Principles and other national or international AI risk management and governance frameworks
serve as a common reference point for AI risk management frameworks across different jurisdictions.
The OECD due diligence framework described in the MNE Guidelines and elaborated on in the OECD Due Diligence Guidance for Responsible Business Conduct (“RBC Guidance”) (OECD, 2018[3]) serves as the foundation for this guidance. The MNE Guidelines and related OECD RBC standards provide voluntary principles for responsible business conduct. Matters covered by the MNE Guidelines may be the subject of domestic law and international commitments. Importantly, OECD RBC standards are aligned with and complementary to the UN Guiding Principles on Business and Human Rights (“UNGPs”) (United Nations Office of the High Commissioner on Human Rights, 2012[4]) and the ILO Tripartite Declaration of Principles concerning Multinational Enterprises and Social Policy (ILO, 2023[5]). This guidance seeks to translate the high-level framework contained in the RBC Guidance into concrete and practical actions for enterprises to identify, prevent, mitigate and remedy actual and potential adverse impacts related to the development and use of AI systems.
To support global cooperation and policy coherence for trustworthy AI, and contribute to interoperability where appropriate, this guidance draws on existing international and national AI-specific risk management frameworks, regulations and other initiatives, to offer practical examples for implementing RBC guidance in the AI context.
By providing a resource for enterprises on how to practically implement the MNE Guidelines while demonstrating consistency and coherence with the AI Principles, the guidance aims to support enterprises operating across multiple jurisdictions and subject to multiple regulatory requirements or engaged in multiple voluntary initiatives to respond to such expectations. This will help companies remain trusted by consumers and will ensure they have the freedom to innovate and be competitive in the global market.
Target audience
The MNE Guidelines recommend that enterprises carry out due diligence to identify and address any actual and potential adverse impacts (i.e., risks) that they: (1) might cause or contribute to through their own operations; or (2) might contribute to or be directly linked to through their business relationships.
The primary audience of this guidance is composed of enterprises in three groups described in more detail below. They include enterprises involved in the AI system lifecycle3 as described in the AI Principles (i.e., planning and design; data collection and processing; model building and/or adaptation; testing, evaluation, verification, and validation; deployment; and operation and monitoring of AI systems). It is also addressed to enterprises involved in supplying digital, physical and financial inputs for the development of AI systems (e.g., data annotation services, compute providers, cloud service providers, hardware manufacturers and investors), as well as the sale, licensing, trade, and use of AI systems, as described in the MNE Guidelines and in line with the revised Recommendation on AI. This includes enterprises outside of the “technology sector” that use AI systems in their operations, products and services.
While the framework in this guidance might be relevant for the development and use of other software and technologies, this guidance specifically focuses on AI systems. AI systems are clearly defined by the Recommendation on AI and explained in detail in an accompanying memorandum (OECD, 2024[6]).
Roles in the AI value chain and business relationships between different actors are non-linear and overlapping. Likewise, the process for AI development does not progress linearly. For example, some developers of AI systems may also be the same companies that design and manufacture hardware or might be involved in gathering data and annotating datasets. To understand RBC due diligence in the context of the development and use of AI systems, this guidance describes enterprises’ due diligence responsibilities by categorising them into different groups according to the activity that they perform.4
The three groupings below are therefore not rigid nor exclusive, but intended to inform how enterprises performing different activities should approach due diligence. Enterprises may conduct activities that would place them in multiple groups and should tailor their due diligence approach accordingly. Likewise, some enterprises’ due diligence efforts will prioritise risks of adverse impacts arising out of their own operations, while others will prioritise risks arising from their business relationships.
Adverse impacts may be related to the activities described in each of the groups. While most of the examples contained in this guidance cover how to address adverse impacts related to activities listed in Group 2, enterprises in Groups 1 and 3 are still expected to conduct due diligence to address adverse impacts that they may cause through their own operations, products and services.
Group 1: Suppliers of AI inputs
Enterprises in this group are suppliers of AI inputs. They provide inputs into the development of the AI system and are generally considered to be the ‘upstream’ segment of the AI system value chain. This includes activities pertaining to the provision of inputs in the AI ecosystem (i.e., the skills and resources, such as data, code, algorithms, models, research, know-how, training programmes, governance, processes, and best practices required to understand and participate in the AI system lifecycle, including managing risks). They include enterprises involved in:
data provision and data annotation
dataset creation and curation
developing, adapting, or providing code for third-party use, including contributions to open-source libraries and software components for AI development
development of metrics and evaluation measures
It also includes activities related to the provision of financial, logistical, administrative, and hardware inputs needed to support the development of the AI system. They include enterprises involved in:
the provision of capital (e.g., financial institutions, venture capital, and other providers of capital)
the provision of digital infrastructure and administrative services (e.g., compute providers, cloud service providers, digital payment platforms, digital labour platforms, operating systems, app stores, security software providers, and enterprise software providers)
the provision of hardware (e.g., semiconductor manufacturers and distributors, network equipment vendors, and other hardware manufacturers).
This guidance does not cover supply chains of hardware inputs (e.g., mining of raw materials and manufacturing of hardware components) which is the subject of separate RBC due diligence guidance.5
Group 2: Enterprises actively involved in the design, development, deployment, and operation of AI systems
Enterprises in this group include enterprises involved in the AI system lifecycle activities listed below. Understanding the AI system lifecycle can help all enterprises better identify risks and interact with business relationships in Group 2. These include enterprises involved in:
planning and design of the system
building the model and/or adapting an existing model
testing, evaluating, verifying and validating the model and systems
deploying6 the system, regardless of the distribution channel (including the distribution of open-source software)
operating the system for customers and monitoring the system.
Group 2 could also include enterprises that modify and re-deploy existing AI models for enterprise-specific use cases.
Group 3: Users of the AI system
Enterprises in this group use AI systems in their operations, products and services, and are generally considered to be the ‘downstream’ segment of the AI system value chain. These include financial institutions and enterprises in the ‘real economy’ (i.e., manufacturers and sellers of goods and services, including those unrelated to AI systems or technology).
Enterprises in this group should consider due diligence on the AI systems that they use as part of their broader due diligence process across their operations and business relationships. This means prioritising the risks of adverse impacts presented by the AI system in relation to other risks the enterprise might be causing, contributing to or linked to in their specific sectors. For example, if the AI system is not related to significant risks, then the enterprise might prioritise other RBC topics for action, including those not related to AI systems.
Box 1.1. Considerations for Small and Medium Sized Enterprises (SMEs)
AI systems have the potential to deliver vast economic benefits to SMEs, including through access to tools that could allow for increased efficiency at lower costs. SMEs also play a critical role at multiple stages in the development of AI systems. Under RBC standards, SMEs are expected, like other enterprises, to carry out due diligence.
RBC standards also recognise, however, that SMEs may not have the same capacity to implement due diligence expectations as larger enterprises. SMEs in the initial stages of research and development, proof-of-concept and funding operate with limited resources, and tend to allocate such resources to more immediate, practical needs for commercialisation of their products or services. Thus, generally speaking, SMEs may face RBC implementation challenges relating to engaging stakeholders, exercising leverage over business relationships, and bearing the costs necessary to take risk prevention and mitigation measures.
To address these challenges, and to promote implementation of standards on RBC, the guidance particularly encourages SMEs, where possible, to use collaborative approaches and engage in industry initiatives to pool resources (in line with competition law), reduce the costs of due diligence and facilitate access to and harmonisation of information on risks of adverse impacts. RBC standards also recognise that the nature and extent of due diligence should be proportionate to the size of the enterprise, its involvement with an adverse impact and the severity of the adverse impact. In acknowledging these realities, the MNE Guidelines seek to ensure that SMEs are not subject to undue burdens, allowing them to focus on the most relevant risks within their capacity. Furthermore, they encourage larger enterprises to prioritise engaging with SME business relationships to support their due diligence processes.
In addition to meeting international expectations, implementing RBC standards may open new markets or enable better access to finance for SMEs. It may help in acquiring or retaining staff. Similarly, it may prove pivotal in integrating into value chains as larger business relationships increasingly face RBC due diligence practices.
SMEs can leverage cooperation networks (e.g., regional AI and digital transformation initiatives1 and regional RBC initiatives2) to seek further technical support and clarity when implementing RBC and AI standards.
Notes:
1. See e.g., the OECD-African Union AI Dialogue on AI
2. See e.g., the OECD global engagement programmes in Asia, MENA and Latin America.
Other relevant audiences
The MNE Guidelines also have a unique promotion and remedy mechanism that relies on National Contact Points for RBC (“NCPs”)7. This guidance can also be a useful resource for NCPs in promoting the MNE Guidelines and informing decisions related to accountability for alleged violations of the MNE Guidelines.
This guidance may also be useful for those developing standards related to responsible AI such as policymakers, regulators, industry-led and multi-stakeholder initiatives that seek to support alignment with international standards.
Other relevant audiences may include civil society organisations, workers, workers’ representatives, trade unions, industry associations, and national regulatory authorities, including data protection authorities and sectoral oversight bodies.
Special attention should be given to the implementation challenges faced by enterprises and governments in developing countries, including the need for capacity-building, technical assistance, and differentiated guidance adapted to local regulatory and institutional realities.
Finally, this guidance is also relevant for individuals and groups and their representatives that have been or may be adversely impacted by an AI system.
Understanding the risks related to the development and use of AI
The development and use of AI systems have the potential to positively impact matters covered by the MNE Guidelines. For example, use of AI systems can unlock significant improvements in occupational health and safety through automation of dangerous tasks. In public administration, the use of AI in smart grids, smart cities and connected devices can help predict infrastructure maintenance requirements and direct traffic flows to reduce road congestion. The ability of AI to quickly analyse enormous amounts of data, recognise patterns, and build predictive models make it an important tool to detect financial crime, to combat kidnapping and human trafficking, to identify situations of bonded or child labour, and to analyse crime scenes. More broadly, the use of AI systems presents opportunities for innovation, economic growth, and the promotion of human rights.
In order to achieve these positive benefits, it is important that risks of adverse impacts associated with AI systems are effectively managed.8 When considering the entire AI system value chain, a larger scope of risks may be relevant. For example, the significant computing power used to train and use some types of AI systems has had a demonstrated impact (OECD, 2022[7]), and some services performed by humans, such as data enrichment services, have resulted in harmful labour practices (Partnership on AI, 2021[8]). Likewise, as with many new technologies, public and private malign actors may find ways to exploit AI systems. The significant dual-use potential of AI systems and the ability to repurpose them can lead to harmful uses even when their design was intended to be innocuous.
The MNE Guidelines recognise that enterprises should carry out risk-based due diligence with respect to actual and potential adverse impacts of their activities related to technological innovation. They also recognise that enterprises involved in the development of new technology or new applications of existing tools should anticipate adverse impacts and challenges raised by technologies, while promoting responsible innovation.
Multiple frameworks exist at the international, regional and national level that describe risks related to the development and use of AI systems and recommend actions companies should take to address those risks. The scope of risks covered in these frameworks varies. While the list of frameworks can inform a range of potentially relevant risks for an enterprise’s due diligence efforts, it is non-exhaustive and many of the risks overlap and may be linked to each other. Likewise, not all frameworks are relevant for every enterprise. Each enterprise is expected to identify its priority risk areas based on its individual circumstances, including additional risks not listed here. This guidance takes a risk-agnostic approach to remain evergreen. Future research can be used to complement this guidance as policy views and understanding about risks related to AI systems continue to evolve.
Characteristics of trustworthy AI
This guidance is also intended to enable the responsible stewardship and development of trustworthy AI systems. In the context of this guidance, the term “trustworthy AI” refers to AI systems that embody the OECD’s AI Principles, updated in 2024 (OECD, 2024[9]). These are not just principles, but outcomes against which to both assess risk and identify mitigation/prevention responsibilities.
Basics of RBC due diligence
The RBC due diligence framework
The MNE Guidelines set out a voluntary due diligence framework for enterprises that governments have committed to actively promote and implement. It outlines the following measures:
1. embedding responsible business conduct into policies and management systems
2. identifying and assessing actual and potential adverse impacts associated with the enterprise’s operations, products or services
3. ceasing, preventing and mitigating adverse impacts
4. tracking implementation and results
5. communicating how impacts are addressed
6. providing for or co-operating in remediation when appropriate.
Figure 1.1. Graphical representation of the RBC due diligence framework
Source: OECD (2018[3]), OECD Due Diligence Guidance for Responsible Business Conduct, https://doi.org/10.1787/15f5f4b3-en.
These steps are meant to be simultaneous and iterative, as due diligence is an ongoing process that is both proactive and reactive. The steps are described in more detail and contextualised for the development and use of AI in Chapter 2 of this guidance.
The RBC due diligence framework is broadly aligned with other AI risk management frameworks (OECD, 2023[10]) and there is significant overlap across many of the frameworks on key issues. Each of the steps of this guidance points directly to related requirements in existing AI risk management frameworks and can therefore support cross-referencing and coherent implementation of requirements across frameworks and jurisdictions. By meaningfully implementing the recommendations of the other existing AI risk management frameworks, enterprises can observe many of the expectations of the RBC due diligence approach. In some cases, the RBC framework provides additional clarity and closes gaps in other frameworks, particularly with respect to stakeholder engagement and remediation, which are less comprehensively addressed in existing frameworks.
Relationship with legal obligations
The MNE Guidelines provide voluntary principles and standards for RBC consistent with applicable laws and internationally recognised standards. Matters covered by the MNE Guidelines may be the subject of domestic law and international commitments. The MNE Guidelines outline recommendations on RBC that may go beyond what enterprises are legally required to comply with. The recommendation from governments that enterprises observe the MNE Guidelines is distinct from matters of legal liability and enforcement (see MNE Guidelines, Preface, Paragraph 5 (OECD, 2023[1])).
The MNE Guidelines provide that obeying domestic laws in the jurisdictions in which the enterprise operates and/or where they are domiciled is the first obligation of enterprises (see MNE Guidelines, Ch.1, Paragraph 2 (OECD, 2023[1])). Due diligence can help enterprises observe their legal obligations on matters pertaining to RBC. In jurisdictions where domestic laws and regulations conflict with the principles and standards of the MNE Guidelines, due diligence can also help enterprises implement the MNE Guidelines to the fullest extent. Domestic law may also in some instances require an enterprise to take action on a specific RBC issue (e.g., laws pertaining to online risks to minors).
Due diligence expectations derived from or referencing the MNE Guidelines are increasingly being integrated into legal requirements. While this guidance may be helpful to enterprises and governments in better understanding how they could implement some of these legal requirements, it should not be relied on as a blueprint for compliance.
Business confidentiality
When implementing RBC due diligence, sufficient attention should be paid to commercial confidentiality, commercial secrets, commercially sensitive information and possible competition law prohibitions relating to the sharing of such information, as well as information protected through intellectual property laws. While these are legitimate barriers to some aspects of disclosure, transparency and stakeholder engagement, enterprises are still expected to make good faith efforts to communicate and engage meaningfully with stakeholders while appropriately taking into account confidentiality, competition law and other relevant legal concerns.
Implementing RBC in line with competition law
Collaborating with competitors or business relationships to support the implementation of RBC, including as part of sustainability initiatives, is subject to competition law (OECD, 2015[11]).
The MNE Guidelines affirm that “while enterprises and the collaborative initiatives in which they are involved should take proactive steps to understand competition law issues in their jurisdiction and avoid activities which could represent a breach of competition law, credible responsible business conduct initiatives are not inherently in tension with the purposes of competition law and typically collaboration in such initiatives will not be in breach of such laws” (see MNE Guidelines, Ch. X, para 121 (OECD, 2023[1])).
There are three broad practical actions that enterprises can consider in understanding issues related to cooperative activity and competition law:
Seeking the advice of competition authorities: enterprises can seek the advice of competition authorities if they are in doubt as to whether a particular conduct or cooperative activity can be viewed as contrary to competition law and therefore raise regulatory risks.
Practicing transparency: Authorities tend to be more sceptical of initiatives or agreements amongst competitors if conduct is completely private. Therefore, transparency regarding RBC initiatives can be a useful way of mitigating competition concerns. Importantly, the simple fact that an agreement is overt or that there is transparency around an initiative does not shield it from the application of the law if it is indeed anticompetitive. However, transparency can help bring to light potentially problematic issues and thus ensure they are addressed quickly.
Integrating RBC initiatives with compliance programmes: As enterprises have the responsibility to self-assess whether their conduct poses concerns under competition law, they are encouraged to develop and implement compliance programmes to ensure there is awareness of the risks and an understanding of how they should be managed at an organisational level. Most large enterprises will likely already have established anti-trust compliance programmes in place, which can be referenced or adapted for the purpose of specific collaborative initiatives regarding RBC.
Meaningful stakeholder engagement
Meaningful9 stakeholder engagement, especially with workers, workers’ representatives and trade unions, affected communities, or other stakeholders who are most vulnerable to risks of adverse impacts, is essential for effective due diligence. Such engagement supports the development of trustworthy AI systems. Stakeholder engagement is an integral part of all of the steps of the due diligence framework. In some jurisdictions, stakeholder engagement may also be a right in and of itself (e.g., patient consent when applying AI in medical contexts, see also EU AI Act Article 61).
Meaningful stakeholder engagement can also have numerous benefits for enterprises, including through building trust and resilience to crises, and also stronger alignment with market and societal expectations. When stakeholders are involved throughout the due diligence process, they understand not just what decisions were made but why, creating procedural trust even when they might not agree with every choice. Early external feedback helps pivot approaches before significant resources are invested to address less significant risks, potentially reducing costs.
External perspectives might help identify underserved market segments or overlooked use cases. Direct engagement with end users of AI systems might also reveal actual needs rather than assumed ones. For example, enterprises developing and using AI systems in healthcare might consult with medical professionals and representatives of certain patient groups to discover that interpretability is more important than marginal accuracy improvements. AI systems developed with stakeholder input also face fewer barriers to adoption since key concerns have been addressed proactively.
Stakeholders can be impacted at multiple stages across the development and use of AI systems. This includes, for example, individuals whose private data or intellectual property (IP) are used to train AI systems, workers involved in data enrichment services in the development phase, and communities impacted by AI compute harms. Post-deployment, it can include, for example, workers being monitored by AI systems and individuals impacted by government services that use AI systems.
Stakeholder engagement involves interactive processes of engagement with relevant stakeholders through, for example, meetings, hearings or consultation proceedings. Relevant stakeholders are persons or groups, and/or their legitimate representatives, who have rights or interests related to the matters covered by this guidance that are or could be affected by adverse impacts associated with the enterprise’s development, deployment, operation, financing, sale, licensing, trade, and/or use of AI systems.
To be meaningful, stakeholder engagement should be two-way, conducted in good faith and responsive to stakeholders’ views. Stakeholders should be provided with timely, truthful and complete information and should be given an opportunity to provide input prior to major decisions being made that may affect them. Where appropriate, it is particularly important to have stakeholders actively participate in the identification of adverse impacts.
The rapid development and real-time modifications of AI systems may require enterprises to develop or adapt current practices to ensure meaningful stakeholder engagement. Stakeholder engagement should not be seen as a one-off event, but rather as a continuous process built into the AI system lifecycle and where relevant other aspects of the development and use of AI (e.g., when collecting data or during end-use). Practically, there are a number of ways in which enterprises may engage with stakeholders.10 Stakeholders can be involved:
as part of internal discussions about the product purposes and desired impact
as part of product design
as part of dataset curation and validation
as part of training and testing
ongoing during the use of the AI system
as part of post-deployment testing and evaluation of AI systems
through multi-stakeholder initiatives and independent assessment processes
as part of regular trainings for workers in contexts where AI systems are used to manage worker activity. It is critical that workers are updated and informed about the capabilities and risks of AI systems regularly, so that they can meaningfully engage in discussions about the due diligence process.
In addition to any other mechanisms they may implement regarding stakeholder engagement, enterprises should respect the right of workers to establish or join trade unions and representative organisations of their own choosing, including by not interfering with that choice (OECD, 2023, MNE Guidelines, Ch. V, para 1(a)).
In very large enterprises that may develop or use hundreds of AI systems, stakeholder engagement at multiple stages of development of each AI system might not be feasible. SMEs may also face resource and access challenges in engaging stakeholders. All enterprises may face challenges related to the speed of AI development, which can constrain the availability of relevant stakeholders to engage with enterprises. These challenges suggest that stakeholder engagement requires careful planning and that enterprises may choose different modalities of engagement, such as approaching it at a higher level with the objective of transferring learnings to the product level. Product-specific engagement (e.g., in the design phase or with impacted stakeholders) may only be practical in certain high-risk contexts.
Limited stakeholder literacy on AI and RBC issues might also present engagement challenges. When stakeholders understand an AI system’s capabilities, limitations, and potential consequences, they can make informed decisions and provide meaningful input on risk management processes. This knowledge empowers them to identify potential adverse impacts before they occur and advocate for responsible development and deployment practices. By investing in education and transparent communication about AI systems, enterprises can foster trust and encourage collaborative problem-solving, ultimately leading to a more efficient due diligence process.
Together, enterprises and stakeholders are encouraged to identify methods for engagement that are feasible and effective for them. Enterprises should prioritise engaging with stakeholders, or their interlocutors, who are most likely to be affected by the activities of the enterprise. Special efforts should be made to engage with stakeholders who are the most vulnerable to risks of adverse impacts.
How to use this guidance
This guidance is intended to provide a framework to support enterprises' implementation of the MNE Guidelines, the RBC Guidance and the OECD Recommendation on AI, and should be used in conjunction with these standards, as well as other international, national and industry frameworks, initiatives and other sources of risk management information, particularly context-specific guidance that provides more detail on certain risks or use cases. Other sources of information are referenced throughout the document.
Acknowledging that relevant AI-related regulation and voluntary frameworks differ between countries, enterprises can tailor their due diligence actions to their specific contexts and to the regulatory environments within which they operate. Additionally, compliance with these regulations or frameworks will often contribute towards observance of related provisions of this guidance.
Users of this guidance can first read through and understand the core framework and practical examples before turning to the more context-specific resources cited in the endnotes or the modules accessible on the OECD.AI Catalogue of Tools & Metrics (OECD, n.d.[12]).
Notes
1. The Recommendation on AI identifies five complementary values-based principles relevant to all stakeholders and five recommendations to policymakers, referred to together in this guidance as the “AI Principles”.
2. According to the MNE Guidelines Ch. I, paragraph 4, “A precise definition of multinational enterprises is not required for the purposes of the Guidelines. While the Guidelines allow for a broad approach in identifying which entities may be considered multinational enterprises for the purposes of the Guidelines, the international nature of an enterprise’s structure or activities and its commercial form, purpose, or activities are main factors to consider in this regard.”
3. An AI system lifecycle typically involves several phases: plan and design; collect and process data; build model(s) and/or adapt existing model(s) to specific tasks; test, evaluate, verify and validate; make available for use/deploy; operate and monitor; and retire/decommission. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase. The AI Principles refer to enterprises in the AI system lifecycle as “AI actors” (OECD, 2024[2]).
4. The grouping based on activities was developed in the OECD report on Advancing accountability in AI (OECD, 2023[17]) and further detailed in a follow-up draft report, Draft mapping and consolidation of relevant actors, issues, and terminology for Responsible Business Conduct in AI [DSTI/CDEP/AIGO(2023)12], which was discussed at the November 2022 AIGO and WPRBC meetings.
5. For more information on addressing risks associated with raw materials, see the OECD Due Diligence Guidance for Responsible Supply Chains for Minerals (OECD, 2016[40]); for more on addressing risks related to electronics and vehicle manufacturing see the RBC Spotlight on Due Diligence in Electronics and Vehicle Manufacturing (OECD, 2025[39]).
6. The term ‘deploy’ is used differently in the AI Principles than in the EU AI Act. Under the AI Principles, deployment can be understood as making the AI system available for use. Under the EU AI Act, “deployer means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity” (European Union, 2024[14]). The EU AI Act definition of deployer is more akin to what this guidance describes as Group 3: Users of the AI system.
7. NCPs have the mandate of furthering the effectiveness of the MNE Guidelines by undertaking promotional activities, handling enquiries and contributing to the resolution of issues that arise relating to the implementation of the MNE Guidelines in specific instances. Any individual or organisation can bring a specific instance (case) against an enterprise to the NCP where the enterprise is operating or based regarding the enterprise’s operations anywhere in the world. NCPs facilitate access to consensual and non-adversarial procedures, such as conciliation or mediation, to assist the parties in dealing with the issues. NCPs are required to issue final statements upon concluding the specific instance processes. NCPs can also make recommendations based on the circumstances of the specific instance. Organisations can refer to the Responsible Business Conduct OECD Guidelines for Multinational Enterprises (OECD, 2025[39]) for information on the NCP process, specific NCPs or cases.
8. The MNE Guidelines also note that “Enterprises should pay special attention to any particular adverse impacts on individuals, for example human rights defenders, who may be at heightened risk due to marginalisation, vulnerability or other circumstances, individually or as members of certain groups or populations, including Indigenous Peoples. OECD due diligence guidance, including the OECD Due Diligence Guidance on Responsible Business Conduct, the OECD Due Diligence Guidance on Meaningful Stakeholder Engagement in the Extractive Sector, and the OECD-FAO Guidance for Responsible Agricultural Supply Chains provides further practical guidance in this regard, including in relation to Free, Prior and Informed Consent (FPIC). United Nations instruments have elaborated on the rights of Indigenous Peoples (UN Declaration on the Rights of Indigenous Peoples).” (Commentary 45).
9. To be meaningful, stakeholder engagement should be two-way, conducted in good faith and responsive to stakeholders’ views. Stakeholders should be provided with truthful and complete information and should be given timely opportunity to provide input prior to major decisions being made that may affect them (see (OECD, 2018[3])).
10. For more information on how to meaningfully engage stakeholders when designing AI-powered products and services, see the ECNL Framework for meaningful engagement of external stakeholders in AI development (European Center for Not-for-Profit Law, 2023[38]).