OECD creates expert group to foster trust in artificial intelligence


Paris - 13 September 2018 


The Organisation for Economic Co-operation and Development has created an expert group (AIGO) to provide guidance in scoping principles for artificial intelligence in society. The formation of the group is the latest step in the organisation’s work on artificial intelligence to help governments, business, labour and the public maximise the benefits of AI and minimise its risks. 


The group is made up of experts from OECD member countries and from think tanks, business, civil society, labour associations and other international organisations. In 2016 the OECD’s Committee on Digital Economy Policy began discussing the need for a Recommendation on AI principles by the OECD Council. The Committee decided in May 2018 to establish the expert group to scope AI principles that could be adopted in 2019. See the list of expert group members (pdf).

Watch chess legend Garry Kasparov, who was beaten by supercomputer Deep Blue in 1997, give his view on AI and why he believes the creation of the OECD group is a good move. Read more in his blog post.


The expert group will hold its first meeting on 24-25 September at OECD headquarters in Paris. Following two further meetings, including one in January 2019 at the Massachusetts Institute of Technology in Cambridge, the group’s findings will help the Secretariat shape the Recommendation for the Council. 

 

“In the best traditions of the OECD, we are reaching out to a wide group of experts and thinkers to assist us in developing principles that will keep our countries competitive, guide the ethical progress of this fast-moving technology and share our knowledge with the broader world,” said Wonki Min, Vice Minister of Science and ICT of Korea, who, as chair of the OECD’s Committee on Digital Economy Policy, will head the expert group. 

Developing AI principles is a natural outgrowth of the OECD’s work over the past two years on the multidisciplinary “Going Digital” and “Next Production Revolution” projects, which are examining the broad impact of new technologies on society. Like those two projects, the AI principles will draw on expertise across the committees and directorates of the OECD under the coordination of the OECD Directorate for Science, Technology and Innovation.

Along with working toward a Council Recommendation on AI principles, the OECD is planning to set up an OECD Policy Observatory on AI. The Observatory will bring together committees from across the OECD as well as a range of other stakeholders. The goal will be to identify promising AI applications, map their economic and social impact and share the information as widely as possible.

Nineteen countries around the world are represented on the AI expert group. They are joined by representatives of the European Commission, business and labour organisations, and institutions such as the Institute of Electrical and Electronics Engineers, the Massachusetts Institute of Technology, Harvard’s Berkman Klein Center and the French Institute for Research in Computer Science and Automation (INRIA). 

The expert group grew out of concepts debated at OECD events in 2016 and 2017, notably the conference titled “AI: Intelligent Machines, Smart Policies” of October 2017. Over two days of discussions, a consensus emerged that the far-reaching changes driven by AI systems offer dynamic opportunities for improving the public, economic and social sectors. More than 300 participants and 50 speakers focused on ways AI can make business more productive, improve government efficiency and address many of the world’s most pressing problems. 

At the conference and in subsequent discussions, attention was focused on the best ways government and tech companies can build public trust in AI systems, which is seen as essential to taking full advantage of AI’s potential. 

The diversity of representation on the expert group reflects the concept that AI’s global impact requires a global consensus. Many international organisations, from the European Union and United Nations to the G7 and G20, are debating aspects of AI’s impact on work and the economy. National governments, particularly among the 36 OECD member countries, are developing their own strategies. Tech companies and labour are also working toward common positions on the impact of AI on future jobs, ethical guidelines and reducing bias. 


In keeping with the OECD's commitment to cooperation with developing countries, the experts are also expected to identify ways to ensure that the benefits of AI are shared as widely as possible and that global standards are developed to ensure trust in AI. 

Cyrus Hodes, an expert group member and adviser on AI to the United Arab Emirates, said he expects the group to take full advantage of the OECD’s influence. “As an international body researching and examining societies’ economic challenges to propose practical recommendations to policy makers, the OECD is uniquely positioned to bring together its set of in-house, member and partner countries’ experts to tackle the impact of fast moving emerging technologies, starting with rise of artificial intelligence,” said Hodes. “This provides an invaluable tool for policy makers to adapt, embrace the upsides while mitigating risks of such powerful systems.” 

Another member of the group, Carolyn Nguyen, director of technology policy at Microsoft, emphasised the need for wide availability of AI to people from both developed and developing countries, based on an ethical framework for trustworthy AI that includes principles of ‘Human-centered AI’, safety, fairness, transparency, privacy and inclusiveness. 

Another important aspect of taking advantage of AI within the boundaries of digital security and human rights is striking a balance between innovation and guiding principles. Professor Osamu Sudoh of the University of Tokyo said it is essential to ensure that AI research and development activities are not hindered and that developers do not face excessive burdens that could slow progress. 

At the same time, concerns have been expressed about the need for transparency and accountability in developing and deploying AI systems. “Artificial intelligence must put people and planet first,” said Christina Colclough, director of digitalisation and trade at UNI Global Union, which represents workers. “Ethical AI discussions on a global scale are essential to guarantee a widespread, implemented and transparent solution.” 

Achieving the right balance, weighing the benefits of AI against its risks, requires the kind of open-minded debate at the heart of the expert group’s task and the overall efforts of the OECD to develop and share principles for AI in society.  
