Sharing trustworthy AI models with privacy-enhancing technologies

Policy paper

Abstract

Privacy-enhancing technologies (PETs) are critical tools for building trust in the collaborative development and sharing of artificial intelligence (AI) models while protecting privacy, intellectual property, and sensitive information. This report identifies two key types of PET use cases. The first is enhancing the performance of AI models through confidential and minimal use of input data, with technologies such as trusted execution environments, federated learning, and secure multi-party computation. The second is enabling the confidential co-creation and sharing of AI models using tools such as differential privacy, trusted execution environments, and homomorphic encryption. PETs can reduce the need for additional data collection, facilitate data-sharing partnerships, and help address risks in AI governance. However, they are not silver bullets. While combining different PETs can help compensate for their individual limitations, balancing utility, efficiency, and usability remains challenging. Governments and regulators can encourage PET adoption through policies such as guidance, regulatory sandboxes, and R&D support, which would help build sustainable PET markets and promote trustworthy AI innovation.
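The report itself contains no code. Purely as an illustrative sketch of one PET named above, the toy Python example below shows the idea behind federated averaging: each client trains on its own private data and shares only model weights, which a coordinator averages into a global model, so raw data never leaves the clients. All names, data, and hyperparameters are hypothetical and not drawn from the report.

```python
# Illustrative sketch only: federated averaging on a toy linear model.
# Clients keep their raw data local and share only trained weights.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=20):
    """Plain gradient descent on one client's private data (toy linear regression)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Hypothetical private datasets held by three separate clients.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Federated averaging: broadcast global weights, train locally, average the updates.
global_w = np.zeros(2)
for _ in range(10):
    local_weights = [local_train(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)

print("Global model weights after federated averaging:", global_w)
```

In practice this basic scheme is typically combined with other PETs mentioned in the report, for example adding differential-privacy noise to the shared updates or aggregating them inside a trusted execution environment, since model updates alone can still leak information.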