Artificial intelligence

How can we ensure that AI benefits society as a whole?

Artificial intelligence (AI) is transforming every aspect of our lives. It influences how we work and play. It promises to help solve global challenges like climate change and access to quality medical care. Yet AI also brings real challenges for governments and citizens alike.

As it permeates economies and societies, what sort of policy and institutional frameworks should guide AI design and use, and how can we ensure that it benefits society as a whole?

The OECD supports governments by measuring and analysing the economic and social impacts of AI technologies and applications, and engaging with all stakeholders to identify good practices for public policy.

OECD.AI Policy Observatory

Policies, data and analysis for trustworthy artificial intelligence

The OECD AI Policy Observatory (OECD.AI) combines resources from across the OECD and its partners from all stakeholder groups. It facilitates dialogue and provides multidisciplinary, evidence-based policy analysis and data on AI’s areas of impact. It is a unique source of real-time information, analysis and dialogue designed to shape and share AI policies across the globe.

Its country dashboards allow you to browse and compare hundreds of AI policy initiatives in over 60 countries and territories. The Observatory also hosts the AI Wonk blog, a space where the OECD Network of Experts on AI and guest contributors share their experiences and research.

Artificial Intelligence: OECD Principles

How governments and other actors can shape a human-centric approach to trustworthy AI

The OECD Principles on Artificial Intelligence promote AI that is innovative and trustworthy and that respects human rights and democratic values. They were adopted in May 2019 by OECD member countries when they approved the OECD Council Recommendation on Artificial Intelligence.

The OECD AI Principles were the first AI principles adopted by governments. They include concrete recommendations for public policy and strategy, and their general scope means they can be applied to AI developments around the world.

The Principles were updated in 2024 in response to recent developments in AI technologies, notably the emergence of general-purpose and generative AI. The updated Principles more directly address AI-associated challenges involving privacy, intellectual property rights, safety, and information integrity.

The OECD's updated definition of AI systems

Reaching consensus on a definition of an AI system has proven complicated in every sector and expert group.

However, if governments are to legislate and regulate AI, they need a definition to serve as a foundation. Given the global nature of AI, a definition shared by all governments also enables interoperability across jurisdictions.

Recently, OECD member countries approved a revised version of the Organisation’s definition of an AI system.

OECD Framework for the Classification of AI Systems

A tool for effective AI policies

Developed by the OECD.AI Network of Experts, the OECD Framework for the Classification of AI Systems helps policy makers, regulators, legislators and others assess the opportunities and risks that different types of AI systems present, inform their AI strategies and ensure policy consistency across borders.

The Framework is a user-friendly tool that links the technical characteristics of AI systems with their policy implications. It is based on the OECD AI Principles, which promote values such as fairness, transparency, safety and accountability, and policies such as building human capacity and fostering international cooperation.

The OECD AI Incidents Monitor

An evidence base for effective AI policy

While AI offers tremendous benefits, some of its uses produce dangerous results that can harm individuals, businesses and societies. These negative outcomes, captured under the umbrella term “AI incidents”, are diverse in nature and happen across sectors and industries.

The OECD AI Incidents Monitor (AIM) documents AI incidents to help policymakers, AI practitioners and other stakeholders worldwide gain insight into the incidents and hazards that concretise AI risks. Over time, AIM will help reveal patterns, build a collective understanding of AI incidents and their multifaceted nature, and serve as an important tool for trustworthy AI.
