OECD initiatives on AI




Artificial intelligence is rapidly permeating economies and societies, promising productivity gains, greater efficiency and lower costs. This has prompted calls to facilitate the adoption of AI systems to promote innovation and growth, help address global challenges, and boost jobs and skills development, while at the same time establishing appropriate safeguards to ensure that AI systems benefit people broadly. Areas of focus include transparency, respect for human rights, non-discrimination, privacy and control, and safety and security.


The OECD has been working on AI for several years. Some recent milestones include:

  • April 2016: G7 ICT Ministers agreed in Takamatsu, Japan, to start an international discussion on the envisaged rapid development of AI.
  • November 2016: The OECD Committee on Digital Economy Policy (CDEP) held a Technology Foresight Forum on AI that looked at the benefits and challenges brought by the development of AI.
  • September 2017: G7 ICT and Industry Ministers agreed in Turin, Italy, to "look forward to further multistakeholder dialogue and to advancing our understanding of A.I. cooperation, supported by the OECD."
  • October 2017: The OECD conference on AI: Intelligent Machines - Smart Policies brought together over 300 technologists, senior policy makers and representatives of civil society, labour and business. There was universal agreement that AI already provides beneficial applications that are used every day by people worldwide. Going forward, the issue is to ensure that the development and uses of AI systems are guided by principles that promote well-being and prosperity while protecting individual rights and democracy.
  • March 2018: G7 Innovation Ministers agreed in Montreal, Canada, to "facilitate multistakeholder dialogue and collaboration on artificial intelligence to inform future policy discussions by G7 governments, supported by the OECD in its multistakeholder convener role".
  • September 2018: The OECD created an expert group to foster trust in artificial intelligence.
  • February 2019: The OECD expert group on AI identified principles for the responsible stewardship of trustworthy AI and proposed specific recommendations for national policies to implement them.

Current work focuses on mapping the economic and social impacts of AI technologies and applications and their policy implications. This includes improving the measurement of AI and its impacts, and shedding light on important policy issues such as labour market developments and skills for the digital age, privacy, accountability of AI-powered decisions, and the responsibility, security and safety questions that AI raises.

Analysis and measurement

The OECD supports governments through its core capabilities of policy analysis, dialogue and engagement, identification of best practices, and capacity building. The OECD’s analytical work is exploring questions such as:

  • How can innovation ecosystems and regulatory frameworks foster the development of AI?
  • How can the opportunities offered by AI be harnessed to support better lives?
  • How can firms, including SMEs, be enabled to navigate the AI transition?
  • How can we ensure that AI does not exacerbate inequality?
  • How can AI improve competitiveness, innovation and sustainable growth?
  • How can governments and businesses use AI to provide citizens with better services?
  • How can citizens, educators and businesses be prepared for the jobs of the future, while minimising the negative impacts of the transition?
  • How can biases be mitigated in the use of AI to ensure that it serves all?
  • How can AI systems be made secure, safe, transparent and accountable?

AI policy observatory

Working with committees across the OECD and a wide spectrum of external actors, the OECD AI Policy Observatory, to be launched in 2019, will provide insights on public policies to ensure AI’s beneficial use:

Across government. The Observatory will work with committees across the OECD, leveraging the Organisation’s multidisciplinary and cross-cutting expertise to serve as a centre for evidence collection, debate and guidance for governments on how to ensure the beneficial use of AI. It is an outcome of the OECD’s broader Going Digital Project, which brings together multiple policy domains.

Engaging all stakeholder groups. The OECD AI Policy Observatory will engage a wide spectrum of actors from different stakeholder groups. The rapid pace of AI research and deployment is shrinking the time frame, and blurring the distinction, between AI research and its impact on economies and societies. There is uncertainty about the future speed and scale of the transition, and complex questions arise around the legal, ethical, cultural and technical facets of AI. These factors underscore the need for robust and timely engagement between government, industry, policy and technical experts, and the public.


Expert group on AI in society (AIGO)


In May 2018, the OECD’s Committee on Digital Economy Policy (CDEP) established an Expert Group on Artificial Intelligence in Society (AIGO) to scope principles for public policy and international cooperation that would foster trust in and adoption of AI and that could form the basis of a Recommendation of the OECD Council in the course of 2019. In the same spirit, the 2018 Ministerial Council Meeting Chair’s statement urged “the OECD to pursue multistakeholder discussions on the possible development of principles that should underpin the development and ethical application of artificial intelligence in the service of people”.


The Expert Group consisted of over 50 experts from different sectors and disciplines, including governments, business, the technical community, labour and civil society, as well as the European Commission and UNESCO. It held four meetings: two at the OECD in Paris, on 24-25 September and 12 November 2018; one at MIT in Boston, on 16-17 January 2019; and a final meeting in Dubai, on 8-9 February 2019, on the margins of the World Government Summit. The group identified principles for the responsible stewardship of trustworthy AI that are relevant for all stakeholders, such as respect for human rights, fairness, transparency and explainability, robustness and safety, and accountability. It also proposed specific recommendations for national policies to implement the principles. This work will inform the development of an OECD Council Recommendation on artificial intelligence.





