The development and use of technology can lead businesses to cause or contribute to human rights abuses and other social and environmental harms. For example, bias in datasets and machine learning models can lead to discriminatory hiring practices, while algorithms used by online platforms can manipulate consumers and voters by amplifying the spread of disinformation. Technology can also enable privacy infringements, such as widespread surveillance and invasive monitoring of worker activity.
The risks related to the development and use of artificial intelligence (AI) in particular have grabbed the attention of businesses and policymakers in recent years. In response to these concerns, there is a growing emphasis on responsible and trustworthy AI, including the advancement of ethical frameworks, guidelines, and standards for AI development and use.
The OECD has developed the first internationally agreed, government-backed Due Diligence Guidance for Responsible AI. Endorsed by all OECD member countries, plus 17 partner governments and the EU, this Guidance helps enterprises navigate the complex terrain of AI risk management. It is designed to help businesses ensure that the AI systems they develop are trustworthy, are used and developed safely and responsibly, and are aligned with broad societal values.