This chapter outlines key policy considerations for the development of regulatory and supervisory initiatives aimed at supporting the safe and broad diffusion of AI in Italian financial markets. It draws on evidence collected through the OECD project survey, cross-country analysis providing comparative evidence from EU member states and other OECD member countries, OECD analytical work, and insights gathered through engagement with industry stakeholders and Italian financial authorities. The policy considerations are organised into eight areas and are intended to help harness the potential of AI to foster more efficient and inclusive financial markets, strengthen the competitiveness of Italy’s economy, in turn contributing to EU competitiveness, and maintain a high level of consumer protection. While most considerations are directed at the Italian financial authorities, some relate to ongoing EU-level regulatory and supervisory initiatives.
Artificial Intelligence in Italian Financial Markets
3. Policy considerations
3.1. Overview of key policy considerations
The policy considerations are grouped into eight areas, as outlined in Table 3.1. Some policy considerations relate to ongoing EU‑level regulatory initiatives, and their implementation will depend on the evolution of the EU legal framework. As the eight themes are closely interlinked, they warrant an integrated policy approach.
Table 3.1. Summary of policy considerations
| # | Policy consideration | Authorities responsible | Timeframe | Linked themes |
|---|---|---|---|---|
| | Theme 1: Strengthen recurring data collection on AI adoption and exposure | | | |
| 1 | Closer co‑ordination of data collection initiatives on AI adoption (e.g. common definitions, taxonomy) | Italian & EU | Short term | Themes 2, 8 |
| 2 | Consider conducting a joint industry-wide data collection exercise | Italian | Medium/long term | |
| 3 | Promote convergence of data collection efforts at the EU level | EU | Short/medium/long term | |
| | Theme 2: Promote and support clarity and simplification of the regulatory and supervisory framework | | | |
| 4 | Promote supervisory guidance, in co‑operation with ESAs | Italian & EU | Short/medium term | Themes 1, 4 |
| 5 | Clarify supervisory expectations for supervised entities (public-facing) | Italian & EU | Short/medium term | |
| 6 | Support compliance with data governance frameworks | Italian | Medium/long term | |
| 7 | Enhance co‑operation with non-financial authorities | Italian | Short/medium term | |
| | Theme 3: Encourage stronger AI governance arrangements for supervised entities | | | |
| 8 | Support efforts to promote stronger governance structures | Italian | Short/medium term | Themes 2, 7, 8 |
| 9 | Provide high-level, cross-sectoral assistance with the development of AI governance frameworks | Italian | Short/medium term | |
| 10 | Assist firms in the governance of non-critical third-party arrangements | Italian | Short/medium term | |
| 11 | Promote the use of explainability methods depending on the level of use‑case materiality | Italian | Short/medium term | |
| 12 | Promote AI-specific financial consumer protection and literacy | Italian | Short/medium term | |
| 13 | Promote robustness of AI cyber-resilience frameworks | Italian | Short/medium term | |
| | Theme 4: Promote safe data-sharing frameworks and practices | | | |
| 14 | Foster safe data-sharing frameworks, operationalise open-finance technical standards and promote other data-sharing frameworks | Italian | Medium/long term | Theme 2 |
| 15 | Promote participation of financial firms in the EU common data spaces and contribute high-quality public-sector datasets where legally possible | Italian | Medium/long term | |
| 16 | Promote safe data-sharing practices | Italian | Medium/long term | |
| | Theme 5: Foster and support public-private co‑operation | | | |
| 17 | Increase AI collaboration between the industry and the public sector (e.g. multi-stakeholder forums, thematic working groups, joint testing frameworks) | Italian | Short/medium term | Themes 6, 7 |
| | Theme 6: Highlight and enhance the role of innovation facilitators | | | |
| 18 | Promote the existing national-level innovation facilitator ecosystem | Italian | Short/medium term | Themes 5, 7 |
| 19 | Enhance the innovation facilitator ecosystem, encouraging participation by smaller firms, including non-supervised entities (e.g. FinTech start-ups) | Italian | Short/medium term | |
| 20 | Increase the integration of national and EU-level innovation facilitators | Italian & EU | Medium/long term | |
| | Theme 7: Support whole‑of-government public sector strategic direction for AI development and use in the finance sector | | | |
| 21 | Foster stronger collaboration across industry, academia and authorities | Italian | Short/medium term | Themes 5, 6 |
| 22 | Leverage existing centres of excellence and AI factories | Italian | Short/medium term | |
| | Theme 8: Strengthen supervisory capacity | | | |
| 23 | Enhance capacity for authorities at the national and EU level | Italian & EU | Short/medium term | Theme 5 |
| 24 | Consider enhanced sharing of AI-driven SupTech tools at the EU level | EU | Short/medium term | |
3.1.1. Strengthen recurring data collection on AI adoption trends
Information gaps on AI adoption by financial institutions remain a common challenge across OECD economies (OECD, 2026[1]). The objective is to build over time a harmonised, consistent and recurring system for collecting granular data on AI adoption in the financial sector. Such a system would enable Italian and EU authorities to close information gaps, improve comparability across institutions and sectors, and strengthen their ability to monitor risks and support safe AI innovation, using a robust, methodologically aligned and evidence‑based approach to support co‑ordinated policy design and implementation. In addition to enhancing regulatory effectiveness, greater harmonisation would also reduce reporting burdens for supervised entities (particularly those operating across several sub-sectors) and promote simplification.
Building on their considerable expertise, Italian financial supervisory authorities could consider enhancing existing data collection frameworks to capture AI adoption and experimentation at a more granular level, by incorporating new AI adoption metrics into existing data collection exercises, drawing from the 2025 OECD project survey and risk-related indicators proposed by the FSB, as well as measures capturing perceived barriers to the safe adoption of AI innovation.
Closer co‑ordination of Italian authorities’ data collection initiatives could begin with efforts to harmonise definitions and taxonomies related to AI innovation. In the longer term, a harmonised cross-sectoral approach could promote consistency and comparability while alleviating the burden that multiple surveys place on supervised entities operating across several sub-sectors. This could include jointly co‑ordinated, industry-wide data collection exercises, possibly on a recurring basis, taking into account sectoral differences (e.g. scope, priorities). Guidance agreed at the EU level will play a key role in strengthening comparability and reducing reporting burdens.
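A harmonised taxonomy of this kind could be operationalised as a shared reporting schema with controlled vocabularies, so that responses from different sub-sectors are directly comparable. The sketch below is purely illustrative: the field names, sector labels and deployment stages are assumptions for the example, not an agreed taxonomy.

```python
# Illustrative sketch of a harmonised AI-adoption reporting record.
# All field names and allowed values are hypothetical, not an agreed
# cross-sectoral taxonomy.
from dataclasses import dataclass

DEPLOYMENT_STAGES = ("exploration", "pilot", "production")
SECTORS = ("banking", "insurance", "securities", "payments")

@dataclass(frozen=True)
class AIAdoptionRecord:
    sector: str
    use_case: str            # e.g. "credit scoring", "AML monitoring"
    stage: str               # one of DEPLOYMENT_STAGES
    third_party_model: bool  # reliance on an external vendor model

    def __post_init__(self):
        # Controlled vocabularies make submissions comparable across surveys.
        if self.sector not in SECTORS:
            raise ValueError(f"unknown sector: {self.sector}")
        if self.stage not in DEPLOYMENT_STAGES:
            raise ValueError(f"unknown deployment stage: {self.stage}")

record = AIAdoptionRecord("banking", "credit scoring", "pilot", True)
```

A schema along these lines would let authorities aggregate returns from different sub-sectors without case-by-case mapping of divergent terminology.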
At the EU level, there is a need to foster greater convergence among European Supervisory Authorities’ (ESAs) data collection efforts to promote coherence and consistency across respective surveys, including efforts by the Single Supervisory Mechanism (SSM). Closer co‑ordination of data collection exercises across sectors at the EU level would help reduce the burden on supervised entities that are required to respond to multiple, often divergent, requests, while also improving the quality of the data collected. Efforts for harmonisation of terminology and methodological approaches at the EU level can increase the comparability of data collected across sectors domestically and support national efforts for taxonomy harmonisation.
In this regard, Italian supervisory authorities are encouraged to continue playing an active role in EU initiatives by contributing to data collection exercises, leveraging insights from the project survey, and sharing domestically identified insights. High-quality input from EU member states in surveys conducted by EU authorities enables timelier EU-level responses that account for national differences, without overly burdening supervised entities.
3.1.2. Promote and support clarity and simplification of the regulatory/supervisory framework
Lack of clarity and alignment in regulatory requirements and supervisory expectations applicable to AI in finance was identified as the most significant constraint to AI deployment by the Italian finance sector. The objective is to promote a clearer, more coherent and simplified regulatory and supervisory framework for AI in finance, aiming to strengthen effective oversight, reduce regulatory uncertainty and ensure clear and consistent supervisory expectations across the EU, while also safeguarding fundamental rights and financial consumer protection. This would, in turn, allow financial institutions to navigate compliance confidently, enabling them to scale up AI investment and promote the wider safe diffusion of AI in the finance sector, ultimately promoting greater competitiveness of the European financial sector in AI innovation.
At the EU level, authorities should continue to pursue the ongoing simplification agenda and strengthen efforts underway to address perceived uncertainties arising from newly enacted AI legislation and its interplay with existing sectoral and other applicable rules. Supervisory guidance could help mitigate the ambiguity perceived by financial firms and provide the legal certainty needed to invest further in AI innovation. Well-designed supervisory guidance would assist market participants in navigating compliance obligations, reducing perceived regulatory uncertainty while facilitating more effective oversight. Any guidance should be carefully calibrated so as not to dampen AI adoption by impeding firms’ ability to experiment with AI innovation, while preserving the objective of providing EU citizens with the highest standard of protection of their fundamental rights. A risk-based approach should be pursued, proportionate to the risks and impacts of specific AI use cases, while highly prescriptive approaches should be avoided given the rapid pace of AI innovation. Guidance could take different forms (e.g. clarifications, interpretative notes, supervisory expectations and communications) or could be provided through closer engagement with the industry (OECD, 2026[1]). In this context, stakeholders could also benefit from further clarification on the exclusion from the regulatory definition of AI of simple statistical techniques used for predictive applications, such as linear or logistic regression, as well as of systems used for mathematical optimisation purposes.
Italian supervisory authorities are encouraged to continue playing an active role in EU-level initiatives aimed at simplifying the regulatory framework for AI, with particular attention to the needs and characteristics of the financial sector. Close co‑ordination among domestic authorities and convergence at the EU level would help prevent regulatory fragmentation or mismatched implementation timelines, while allowing domestic concerns to be reflected at the EU level. Equally important is the role of Italian authorities in effectively communicating the outcomes of EU clarification efforts to supervised entities domestically. Imposing national requirements stricter than those envisaged in EU rules should be avoided, as these could create fragmentation, increase compliance burdens and undermine the domestic sector’s competitiveness.
One area of particular focus is data governance and management frameworks, as compliance with data protection requirements was identified as a significant barrier to AI deployment in Italian financial markets, with emphasis on the perceived complexity of compliance with the General Data Protection Regulation (GDPR). Given that data protection falls largely outside the remit and direct control of Italian financial authorities, they should consider strengthening co‑operation, co‑ordination and information‑sharing with national and EU data protection authorities (DPAs), as well as communicating any clarifications on these issues to supervised entities. Italian financial authorities may thus consider issuing public-facing, high-level clarifications on practical aspects of AI experimentation and deployment by the financial sector, including on the treatment of data for training and testing. These should be closely aligned with the principles and objectives established or under development at the EU level, prepared in close co‑ordination with EU authorities to support harmonised domestic implementation of EU-level outcomes, as well as with domestic non-financial authorities (e.g. the cybersecurity agency, the data protection authority). This could also include further clarity on the treatment of non‑GDPR‑governed data used for training and testing (e.g. non-personal data, publicly available data, synthetic datasets), which falls under reduced protection but still carries implications for lawful processing, data minimisation, traceability and model risk, including bias propagation and unintended inference of personal data.
At the EU level, the European Commission’s Digital Omnibus Package makes constructive progress toward streamlining regulatory requirements related to AI and data governance, aiming to bring considerable administrative relief. EU data protection bodies, namely the European Data Protection Board (EDPB), are actively monitoring the need to issue clarifications regarding the applicability of the requirements to AI model deployment. Clarification of what constitutes legitimate interest as a legal basis for data collection could be particularly helpful for the training of AI models in finance.
Data protection exemplifies a field in which co‑operation among Italian financial supervisory authorities, already at an advanced working level, could be extended to incorporate other non-financial authorities. A more harmonised framework for interacting with data protection authorities and cybersecurity agencies, among others, could be crucial for clarifying challenges encountered by financial sector participants in AI deployment. Italian authorities should also consider exploring ways to enhance co‑operation, co‑ordination and information exchange with EU DPAs, as well as other relevant EU non-financial authorities, each acting within its respective mandate.
3.1.3. Encourage stronger AI governance arrangements for supervised entities
Project survey results indicate that Italian firms are currently adopting a wide range of governance approaches. Effective governance can enable safe, responsible AI innovation; increase stakeholder trust; promote wider uptake of AI innovation; safeguard firms and customers; and promote financial stability. The objective is to strengthen AI governance by ensuring boards and senior management establish robust, risk-proportionate oversight of AI systems. Core elements of AI governance efforts are robust AI cyber-resilience frameworks and cross-sectoral co‑operation on third-party oversight in line with the EU DORA Regulation.
Italian authorities should support the strengthening of the organisational governance of AI in supervised entities, as an integral part of their overall corporate governance framework, with ultimate responsibility lying with the Board of Directors. In line with applicable regulation, Italian authorities may encourage the Boards of supervised entities that plan to deploy AI as part of their ordinary business operations to establish effective strategies for the development and management of AI and to define robust policies for governing and controlling AI-related risks (including operational, legal and reputational risks). The Boards should also periodically assess the contribution of AI to the entity’s performance and verify that AI-related risks are adequately monitored and managed by senior management. Efforts should also be made to ensure that robust data and model governance frameworks for AI systems are implemented with appropriate human oversight and validation across all segments of the financial system.
Italian authorities could consider supporting stronger governance structures through close engagement with supervised entities or through high-level, cross-sectoral guidance that takes a risk-based approach. Existing governance processes could be adapted to AI usage, with adaptations proportionate to the risk and materiality of the use cases implemented. Particular attention should be paid to methods and measures for ongoing assessment of model reliability and robustness by supervised entities, and to their oversight by financial authorities. Effective governance should encompass, at a minimum, human oversight, risk management, safety, security and accountability, in line with the OECD AI Principles, the first intergovernmental standard on AI (OECD, 2019[2]).
The results of the survey point to heavy reliance on a small, concentrated set of third-party vendors for AI-related services in Italy. Emphasis could therefore be placed on the governance of non-critical third parties supporting critical or important functions that involve AI deployment, in particular by promoting cross-sectoral co‑operation on third-party oversight. Closer engagement and potential guidance could help firms increase transparency around these arrangements and manage associated risks.
Project survey respondents indicated that the adoption of certain AI models is often limited due to explainability considerations. Italian authorities should therefore consider promoting further use of explainability methods for AI models in a proportionate manner, depending on the use case.1 As there is no single “correct” approach to explainability, guidance and support should aim to facilitate seamless integration, in line with a principles- and risk-based approach reflecting the level of materiality of different AI use cases and their potential impacts on firms, customers and markets. The scope and timing of such efforts should remain within the discretion of the competent MSAs, based on observed market practices, evidence emerging from supervisory experience, and the evolving state of AI technology.
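Model-agnostic techniques such as permutation feature importance illustrate the kind of explainability method that can be applied proportionately, without access to a model's internals. The sketch below is a minimal illustration assuming a toy linear scoring function; the feature names and weights are invented for the example and do not represent any supervisory expectation.

```python
# Hypothetical sketch of permutation feature importance, a simple
# model-agnostic explainability method. The "credit_score" model and
# its feature weights are invented for illustration only.
def credit_score(features):
    # Toy linear scoring function standing in for an opaque AI model.
    return (0.6 * features["income"]
            + 0.3 * features["history"]
            + 0.1 * features["age"])

def permutation_importance(model, rows):
    """Permute each feature's values across rows and measure the mean
    absolute change in predictions: a larger shift means the feature
    is more influential for the model's output."""
    baseline = [model(r) for r in rows]
    importance = {}
    for feat in rows[0]:
        values = [r[feat] for r in rows]
        rotated = values[1:] + values[:1]  # deterministic permutation
        shifted = [model({**r, feat: v}) for r, v in zip(rows, rotated)]
        importance[feat] = sum(abs(b - s)
                               for b, s in zip(baseline, shifted)) / len(rows)
    return importance

rows = [
    {"income": 0.9, "history": 0.8, "age": 0.3},
    {"income": 0.2, "history": 0.4, "age": 0.7},
    {"income": 0.5, "history": 0.9, "age": 0.5},
]
importance = permutation_importance(credit_score, rows)
```

For a materially important use case such as credit scoring, this kind of summary can be surfaced to model owners and, where relevant, supervisors; for low-materiality use cases a lighter treatment may suffice.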
Importantly, both Italian authorities and financial service providers should promote consumer AI literacy in financial services, to complement financial consumer protection, enable safer online behaviour and strengthen trust in digital finance more broadly.
Almost half of Project survey respondents have not implemented specific safeguards to address emerging AI-specific cyber threats. Italian authorities should therefore highlight the critical importance of robust cyber-resilience frameworks addressing AI-related risks and promote the reinforcement of AI-related cyber preparedness by supervised entities, in line with DORA requirements. Continued co‑ordination with cybersecurity agencies, both domestically and at the EU level, will be essential to ensure that firms adopt effective AI-focused cyber-resilience frameworks. To support this, existing operational protocols for information exchange and joint incident reporting among financial and cybersecurity authorities could be extended to address AI-specific threats, such as adversarial attacks and model vulnerabilities, promoting timely responses while minimising duplication of effort.
Italian authorities can encourage the use of AI-based tools to strengthen cybersecurity and operational resilience across the Italian financial sector, particularly for FMIs, in a proportionate, non-prescriptive manner and on a voluntary basis. This could include the systematic inclusion of AI-related risk scenarios within existing operational resilience and cyber-testing frameworks, such as DORA and TIBER-EU, with particular attention to FMIs. Developing national guidance and a reference taxonomy for AI-related incidents, accompanied by a concrete reporting framework, could also be envisaged to support the structured classification of AI incidents. This would enable firms and authorities to aggregate, analyse and reuse incident information already collected under existing frameworks, in alignment with, and building on, existing DORA incident reporting categories and processes.
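A reference taxonomy of this kind could be operationalised as a structured incident record drawing on a controlled set of categories. The sketch below is purely illustrative: the category names, fields and severity threshold are assumptions for the example and are not drawn from DORA or any existing taxonomy.

```python
# Hypothetical sketch of a structured AI-incident record. The category
# set and the severity rule are illustrative assumptions, not DORA
# categories or any official classification.
from dataclasses import dataclass, field
from datetime import datetime, timezone

AI_INCIDENT_CATEGORIES = {
    "adversarial_input",  # crafted inputs that flip model decisions
    "data_poisoning",     # tampering with training data
    "model_drift",        # silent degradation of model performance
    "prompt_injection",   # manipulation of generative-model instructions
}

@dataclass
class AIIncidentReport:
    category: str
    affected_function: str  # e.g. "credit scoring"
    clients_affected: int
    detected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        # A closed category list keeps reports aggregable across firms.
        if self.category not in AI_INCIDENT_CATEGORIES:
            raise ValueError(f"unknown AI incident category: {self.category}")

    def severity(self) -> str:
        # Illustrative materiality rule: escalate by client impact.
        return "major" if self.clients_affected >= 1000 else "minor"

report = AIIncidentReport("adversarial_input", "credit scoring", 2500)
```

Records with a shared shape like this could be aggregated by authorities alongside incident information already collected under existing reporting processes.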
3.1.4. Promote safe data-sharing frameworks and practices
Data-sharing frameworks, such as Open Finance, provide the foundational infrastructure and critical data flows necessary to enable greater interoperability across the financial sector, serving as a key enabler for the effective deployment of AI in finance for certain use cases (OECD, 2026[3]). The objective is to enable safe, trustworthy and innovation‑enhancing data sharing across the financial sector that can support responsible AI training and validation. Ultimately, the aim is to support secure, standardised and interoperable data-sharing frameworks that ensure privacy‑preserving data flows, and provide firms with reliable, high‑quality datasets that can be used in AI systems. By fostering EU‑wide interoperability of regional data-sharing frameworks, Italian authorities can help reinforce the competitiveness of the EU financial ecosystem. This policy consideration also aims to support the objectives of the Savings and Investment Union (SIU) in facilitating secure data interoperability, deepening market integration across the Union and enhancing cross-border investment.
Currently, the open finance framework is awaiting finalisation at the EU level, with the FiDA proposal introducing a cross-sectoral right to access and share data through common interfaces, supported by consent management and liability rules. Complementing this, the SIU strategy positions interoperability as a driver of competitiveness, aiming to reduce cross-border frictions and broaden retail access to investment markets. Italian authorities may take advantage of this policy momentum by enhancing cross-sectoral collaboration with other authorities, along with industry consultations, to conceptualise how to support data-sharing frameworks in the financial sector with a view to fostering AI innovation. These initiatives may also encompass discussions to raise awareness of data-sharing opportunities among relevant stakeholders, along with relevant practices, methods and measures for secure data exchange and interoperable architectures, such as the use of harmonised APIs, uniform data formats and auditable consent mechanisms.
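One way to make a consent mechanism auditable is a hash-chained consent log, in which each entry commits to its predecessor so that any later tampering is detectable. The following sketch is a simplified illustration under assumed field names and an assumed scheme; it is not a FiDA requirement or a prescribed design.

```python
# Hypothetical sketch of an auditable consent log. Each entry's hash
# commits to the previous entry's hash, so retroactive tampering breaks
# verification. Field names and the scheme are illustrative assumptions.
import hashlib
import json

def _digest(entry: dict, prev_hash: str) -> str:
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class ConsentLog:
    """Append-only log of consent grants and revocations."""

    def __init__(self):
        self.entries = []  # list of (entry_dict, entry_hash) pairs

    def record(self, customer_id: str, data_scope: str, granted: bool):
        entry = {"customer": customer_id,
                 "scope": data_scope,
                 "granted": granted}
        prev = self.entries[-1][1] if self.entries else "genesis"
        self.entries.append((entry, _digest(entry, prev)))

    def verify(self) -> bool:
        # Recompute the chain; any modified entry invalidates its hash.
        prev = "genesis"
        for entry, stored in self.entries:
            if _digest(entry, prev) != stored:
                return False
            prev = stored
        return True

log = ConsentLog()
log.record("cust-001", "account_transactions", True)   # consent granted
log.record("cust-001", "account_transactions", False)  # consent revoked
```

A design along these lines lets both data holders and auditors check, after the fact, that the recorded history of consent grants and revocations has not been altered.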
Italian authorities should also consider conceptualising areas of contribution to the Common European Data Spaces (CEDS), which aim to create trusted environments for data exchange across sectors, including finance, under clear governance, privacy-preserving infrastructure and interoperable standards. The European Financial Data Space (EFDS), now in development, will enable secure sharing of financial data to support innovation and open finance, while initiatives like the Data Spaces Support Centre and SEMIC provide technical and semantic tools for identity, consent, and traceability (Data Spaces Support Centre, 2025[4]; European Commission, 2025[5]). Complementary strategies such as the European Data Union and Gaia‑X reinforce this architecture with compliance automation, high-value datasets, and federated cloud frameworks, giving firms access to curated, standardised data and reducing integration costs (European Commission, 2025[6]; Gaia-X, 2023[7]). For Italy, participation in these ecosystems could unlock reliable datasets for AI, strengthen legal certainty, and promote scalable, consent-based data reuse under EU law. While the exact modalities of these environments are still being finalised, Italian authorities could conceptualise how to promote participation in EU common data spaces (e.g. EFDS, Gaia-X) by providing guidance and contributing high-quality public-sector datasets where legally possible. This could take the form of engaging with industry players to promote participation of firms in the EU common data spaces, along with other methods of enhancing industry access to safe data exchange platforms. Data-sharing initiatives may also be promoted through innovation facilitators, involving participation by market and technology providers and allowing for exploring business and consumer benefits of data sharing (e.g. new products, improved data protection), while addressing risks observed in the sharing of financial data outside structured frameworks.
3.1.5. Foster and support public-private co‑operation
Stronger interaction of the authorities with the industry can deepen supervisory understanding of the practical deployment of AI innovation and the operational contexts of such technologies, while also enhancing the capacity of authorities to identify and address emerging risks in a timely and well-informed manner (OECD, 2026[1]). Close and sustained engagement with the industry can also yield significant benefits for supervised entities, improving the authorities’ understanding of any challenges encountered by supervised firms in their compliance efforts (OECD, 2026[1]). The objective is to strengthen public‑private co‑operation to foster responsible AI innovation and support the competitiveness of the EU financial industry, by enabling regulators and industry to build mutual understanding; improving supervisory insights; creating a more transparent, trusted and well‑informed AI ecosystem; while also safeguarding financial consumer protection and fostering financial literacy. Such efforts can also contribute to more robust and transparent AI governance ecosystems across the financial sector.
Italian authorities should continue promoting closer co‑operation and engagement with the industry, supporting innovation while advancing supervisory oversight objectives. Such initiatives could take several forms, including multi-stakeholder forums, thematic working groups, or offering testing grounds for digital innovation architectures. Italian authorities should build on their existing and ongoing efforts at national and EU level (e.g. Financial Computer Emergency Response Team (CERTfin), Milano Hub, Fintech Channel) as well as on traditional supervisory activities, such as on-site inspections, thematic assessments, and systematic data gathering, to ensure that financial institutions remain compliant with regulatory standards, adequately manage risk, and uphold market integrity (OECD, 2026[1]).
Italian authorities should also consider novel ways of proactive engagement with industry stakeholders beyond the standard supervisory activities as a way to foster mutual understanding. Examples of initiatives for stronger engagement between Italian authorities and the industry could include model testing to support model validation or public-private AI forums to deepen discussions on key issues (OECD, 2026[1]). Testing of AI models offers a practical way to build trust and transparency, while allowing supervisors to observe model behaviour and firms to receive early feedback on supervisory expectations. Public-private forums can foster a common understanding of expectations and standards, enhance accountability, and support proportional oversight.
3.1.6. Highlight and enhance the role of innovation facilitators
The core objective of this policy consideration is to strengthen and better integrate Italy’s innovation‑facilitation ecosystem so that financial institutions can safely experiment with AI, leverage expertise and participate in EU‑aligned testing environments. Ultimately, it aims to expand safe, scalable and inclusive AI experimentation, especially for smaller firms, while ensuring coherence with EU frameworks and enhancing cross‑border collaboration. Such a safe experimentation framework can play a key role in expanding the number of AI use cases in production, thereby enhancing innovation and competitiveness in the field.2
Italy benefits from a well-developed ecosystem of innovation facilitators spanning all major segments of financial activity. Existing facilitators already enable safe testing of AI applications in finance and foster constructive engagement with the industry. Italian authorities could build on existing facilitator arrangements to further enhance their impact: first, by considering fostering access to high-performance computing resources for participants in facilitators; second, by improving data accessibility through the possible sharing of datasets to support safe model testing by financial firms; third, by facilitating access to technical expertise, training and upskilling in domains related to AI development for financial applications; and fourth, by broadening participation in facilitators by encouraging greater involvement of smaller firms in such initiatives, for example through raising awareness of their role. Such widened participation could be encouraged by identifying specific initiatives for smaller firms and setting up networking opportunities among AI market participants. Italian authorities could also consider establishing a dialogue with AI research centres (e.g. AI factories) to address AI capabilities gaps for SMEs. In this respect, Milano Hub could strengthen its role by organising workshops, seminars and masterclasses for the innovation facilitators community on relevant specific themes in order to foster interactions and debate at national level.
Italian authorities could view ongoing regulatory discussions at the EU level as a strategic opportunity to improve the integration between the national innovation facilitators and any EU-level initiatives. Such alignment could boost their effectiveness and encourage wider market participation, especially among financial firms operating across EU member states. Italian authorities could also further contribute to any EU-led efforts for cross-border testing within the EU. The draft implementing act also encourages the involvement of other actors within the sandbox, such as research labs and civil society organisations. Beyond sandboxes, Italian authorities might explore additional avenues to foster innovation and upskilling, for example through the introduction of dedicated hackathons (e.g. Innovation Data Challenge (BdI, 2026[8]), or involving the use of AI-based SupTech tools.
3.1.7. Support whole‑of-government public sector strategic direction for wider AI diffusion in the finance sector
The objective of this policy consideration is to strengthen the whole‑of-government public sector’s strategic leadership in guiding AI development and use in the financial sector by deepening collaboration with industry and academia, and by supporting accessible, compliant AI model development. Such an enhanced form of co‑operation aims to ensure that all firms, including those with fewer resources, can benefit from shared expertise, research, and infrastructure, catalysing responsible AI innovation and contributing to the competitiveness of the financial ecosystem.
Italian authorities could consider fostering stronger collaboration between the public sector, the finance sector and academia, while leveraging existing initiatives (e.g. centres of excellence and AI factories). The authorities could draw on existing initiatives, such as, among others, the Agency for Digital Italy (AGID) Strategy for Artificial Intelligence 2024–2026 and initiatives promoted by the Italian Banking Association’s (ABI) Lab, to facilitate a focus on financial sector applications of AI, while also supporting dedicated research, reskilling and upskilling initiatives in co‑operation with the financial industry.
Official sector support for the development of compliant open-weight models by academia and the private sector could also benefit the Italian ecosystem, especially for firms with smaller budgets, which are unable to develop in-house models. Italy’s advanced IT infrastructure provides a good basis for public sector support for AI model development, drawing on experience from other jurisdictions.3
3.1.8. Strengthen supervisory capacity
The need to equip financial supervisors with the right tools and skills for effective AI oversight in finance is widely acknowledged (OECD and FSB, 2024[9]). Attracting and retaining staff with AI-related skills is a challenge not only for Italian financial sector firms, as reported in the project survey, but also for Italian financial authorities. The objective of this policy consideration is to strengthen the supervisory capacity needed for effective AI oversight by ensuring that financial authorities can attract, train, and retain staff with AI expertise, and by equipping supervisors with modern AI‑enabled SupTech tools. Ultimately, it aims to enhance supervisors’ ability to monitor AI risks, deploy advanced analytical capabilities, and collaborate across borders.
Italian authorities should consider further investment in attracting talent with expertise in AI-related fields, as well as in continuous training and upskilling of existing teams to allow them to combine their domain-specific expertise with a deeper technical understanding of AI systems. Sufficient resources are required to effectively oversee and continuously monitor the evolution of AI deployment in finance, and to allow supervisors to keep abreast of rapid advances on the technological front. Italian authorities should support continuous upskilling in AI and other digital financial innovation domains, leveraging innovative EU platforms such as the EU Supervisory Digital Finance Academy. Efforts could be made towards the development of a structured competency framework and training curriculum. The co‑operation model with EU platforms and academia should be defined and mapped to relevant sustainable funding mechanisms. Authorities may also consider establishing measurable indicators for supervisory capability enhancement.
Increased capacity and upskilling of financial supervisors will be necessary to achieve monitoring and oversight objectives, but also to enable authorities to develop and deploy AI as part of the supervisory activity (OECD, 2026[1]). Supervisory Technology (SupTech) tools leveraging AI can also play a valuable role in supporting supervisory tasks, bringing benefits such as automation, enhanced analytics and greater responsiveness to emerging risks. Such tools are already being widely deployed by Italian financial authorities and by other national authorities at the EU level.
EU-level supervisory authorities should consider strengthening co‑ordinated efforts to enable the strategic pooling of expertise and institutional capacity, including for AI-based SupTech tools. The development or acquisition of SupTech applications involving AI can necessitate significant financial investment, robust technological infrastructure, and specialised internal expertise. AI technologies may be leveraged for supervisory stress testing and to assess the extent of automation used in the production of critical documentation. Closer collaboration could offer a path to pool resources, share knowledge (e.g. sharing of code) and develop common AI-based tools or share existing SupTech applications (OECD, 2026[1]). Joint engagements at the cross-border level for the development and sharing of SupTech solutions, the use of common platforms or co‑ordinated training initiatives, can be beneficial in pooling resources and avoiding duplication of efforts. An appropriate collaboration model should be identified for facilitating public-private partnerships in this domain.
3.2. Detailed policy considerations
3.2.1. Strengthen recurring data collection on AI adoption and exposure
Closer co‑ordination of data collection initiatives on AI adoption, focusing on common definitions and taxonomies
OECD analysis indicates that information gaps on the rate of AI adoption by financial firms remain a common challenge across OECD economies. Supervisory challenges may stem from the distinctive characteristics of AI innovation, including its opaqueness, complexity and pace of evolution. In this regard, granular data collection is a key enabler for conducting effective risk monitoring on the use of AI technologies in finance (OECD, 2026[1]). Such limitations regarding the visibility of AI adoption are also recognised by the FSB, which encourages supervisory authorities to address data gaps as appropriate and to harmonise measurement methodologies and metrics (FSB, 2025[10]).
Italian financial authorities have considerable expertise in recurring data collection initiatives with the supervised entities. Banca d’Italia (BdI) has been conducting the FinTech Survey to examine the level of adoption of technological innovations across financial services. This biennial exercise has taken place since 2017, with the 2025 edition incorporating a dedicated chapter on the use of AI and the implications of transposing the AI Act (BdI, 2025[11]). Furthermore, in 2025, BdI launched a new survey on unregulated Italian fintech operators to supplement the previous survey. BdI also releases templates for the self-assessment of ICT risks and collects data from the Regional Bank Lending Survey (RBLS), which examines, among other aspects, how digitalisation of banking services affects banks’ geographical structure and relations between regional branches and their headquarters (BdI, 2022[12]). Other recent surveys include the 2023 IT Survey of the Italian banking sector on GenAI, executed by Convenzione Interbancaria per l’Automazione (CIPA) and ABI (CIPA, 2024[13]), as well as CIPA’s annual economic survey (CIPA, 2025[14]).
CONSOB and IVASS are also actively monitoring AI adoption in the supervised sectors, mostly on the basis of periodic supervisory documentation, direct engagement with the supervised entities, as well as outreach with academia and other stakeholders. Indicative examples of data collection initiatives include a CONSOB study on the development of the use of AI in the field of wealth management, conducted in 2021 with Assogestioni (CONSOB, 2022[15]); participation in the 2025 ESMA survey on the level of AI adoption by financial institutions in the securities sector (ESMA, 2026[16]); and an IVASS survey on the use of Machine Learning (ML) algorithms by insurance companies in their relations with policyholders (IVASS, 2023[17]). AI-related data may also be accessed from other reporting obligations.
These initiatives showcase the strong foundations of data collection around digital finance innovation within the financial sector, conducted by Italian financial authorities. Rather than relying on distinct data-collection methodologies, Italian supervisory authorities could consider enhancing existing data collection frameworks in order to capture AI adoption and experimentation at a more granular level, as a first step through closer co‑ordination of authorities’ initiatives and efforts to adhere to common definitions. Co‑operative efforts could be gradually escalated, starting from enhanced co‑ordination of survey contents among the institutions, and leading to potentially carrying out a joint, AI-specific survey. Any cross-sectoral exercises should be carefully designed to ensure meaningful comparability across sub-sectors of financial activity, accounting for their specificities. Several national initiatives by Italian financial authorities already reflect this approach (e.g. the BdI innovation taxonomy, complemented by IVASS for insurance). Moreover, harmonising definitions and methodologies should be carried out in accordance with any EU-level initiatives, as EU-level guidance will play a key role in strengthening comparability and reducing reporting burdens. Consideration may also be given to the role that statistical authorities may play in contributing to data collection initiatives.
Consider conducting a joint industry-wide data collection exercise
Over time, Italian supervisory authorities could consider a harmonised cross-sectoral approach that would promote consistency and comparability while also alleviating the burden that multiple surveys place on supervised entities operating across several sub-sectors. Such co‑ordinated industry-wide data collection exercises could occur on a recurrent basis (e.g. every two to three years), ideally using a common template, ensuring alignment in definitions, methodologies and reporting cycles. Ideally, such monitoring would also be embedded within internal organisational structures, for example by establishing co‑ordinating groups for this purpose. In this context, the co‑ordination between contacts in different Italian financial authorities organised for the purpose of this project could serve as a starting point. Such a mapping initiative could also involve the identification of key differences distinguishing AI from other currently supervised technologies.
Four hundred and fifty Italian financial institutions completed the OECD project survey. The number of respondents, and the coverage of all major sub-sectors of the Italian financial industry, provide a representative sample and a novel valuable evidence base on AI deployment in finance in Italy. Other OECD jurisdictions have conducted similar surveys, albeit with smaller numbers of respondents, ranging from 400 in Switzerland to between 100 and 200 respondents in Finland, Japan, Sweden and the United Kingdom (FINMA, 2025[18]; FIN-FSA, 2025[19]; Bank of Japan, 2025[20]; Finansinspektionen, 2024[21]; Bank of England and FCA, 2022[22]). Italian financial supervisory authorities may build upon these project results and consider conducting such industry-wide surveys on a recurrent basis, to strengthen their monitoring efforts, for example as a separate part of the BdI FinTech survey, or within other existing questionnaires.
Italian financial authorities could work towards collecting more granular AI-specific data, which could include the incorporation of new AI adoption metrics, drawing from the 2025 OECD project survey and financial stability risk-related indicators proposed by the FSB. For example, the metrics used for the project survey, related to the technical details of AI models deployed, governance frameworks and constraints to wider adoption could be reused, with updates reflecting recently identified topical areas. New AI adoption metrics may also include the risk-related indicators proposed by the FSB in the FSB toolkit for third-party risk management and oversight. A 2025 FSB report identified a wide range of direct and proxy indicators facilitating efficient monitoring of AI adoption and related vulnerabilities in the financial sector. The FSB promotes the convergence of such indicators with domestic metrics, to be combined with enhanced data sharing among national authorities (FSB, 2025[10]).
Promote convergence of data collection efforts at the EU level
Italian financial firms play a significant role in the EU single market, with many companies operating within multinational enterprise groups, notably FMIs. Monitoring the activity of such institutions may be a challenge for domestic authorities, further exacerbated by the technical complexity of AI technologies and interconnections with third-party AI vendors.
Convergence of data collection efforts at the EU level could help to increase the visibility of such operations, benefiting both domestic and EU-wide monitoring efforts. This would help to reduce the burden on supervised entities that are required to respond to multiple, often divergent, requests, while also improving the quality of the data collected. There is a need to foster greater convergence among European Supervisory Authorities’ (ESAs) data collection efforts to promote coherence, as the resource‑intensive character of recurring data collection, combined with the inherent complexity of AI technology,4 may pose challenges for domestic financial supervisory authorities (OECD, 2026[1]).
Co‑ordination of data collection at the EU level could be beneficial due to the strategic pooling of expertise, institutional capacity and the enhancement of datasets through aggregation of data from across the sectors and EU jurisdictions (OECD, 2026[1]). Italian supervisory authorities are encouraged to continue playing an active role in EU initiatives by contributing to data collection exercises, leveraging on insights from the project survey, and communicating domestically identified insights to EU-level discussions. High quality insights from EU member states enable timelier EU-level responses that account for national differences. Concurrently, particular attention should be paid to avoiding unnecessary duplication of data collection efforts between the national and supranational initiatives.
Furthermore, harmonising terminology and methodological approaches at the EU level could help to increase the comparability of data collected at domestic levels, while also supporting the national efforts related to taxonomy harmonisation. Italy, along with other EU member states, may largely benefit from standardisation of such metrics at the EU-level. In this context, strengthening co‑operation and information-sharing among authorities would therefore be key to achieving a coherent data collection framework.
ESAs play a strategic role in co‑ordinating data collection efforts at the EU level. Such convergence should also extend to the SSM efforts, as greater alignment between ESAs’ surveys and SSM-level initiatives would help to reduce burden on supervised entities, while strengthening the overall consistency of supervisory data collection across the EU financial sector.
ESAs have been conducting extensive surveys on AI adoption across the EU member states. Notably, in 2025, ESMA released a study on the adoption of AI in EU investment funds (ESMA, 2025[23]). In February 2026, ESMA published an article evaluating recent trends related to the use of AI in securities markets, based on a survey conducted in 2025 across the EU (ESMA, 2026[16]). In 2021, the EBA published a paper on ML used in the context of IRB models (EBA, 2021[24]). EIOPA also published surveys on the digitalisation of the EU insurance sector, including on the adoption of AI, Blockchain and the Internet of Things (IoT). The EIOPA report from 2024 includes findings from a 2023 EU-wide market survey, investigating the dynamics, opportunities and risks of digitalisation initiatives in the insurance sector (EIOPA, 2024[25]).
More formalised and periodic data collection initiatives could be conducted alongside the established informal communication channels that domestic financial authorities already operate with their EU peers. Notably, such channels are becoming increasingly formalised and more widespread in their participation.5 Co‑ordination is crucial to ensure supervisory convergence across the EU and to implement consistent principles of proportionality and risk-based supervision.6
3.2.2. Promote and support clarity and simplification of the regulatory and supervisory framework
Promote supervisory guidance, in co‑operation with ESAs
Italian institutions responding to the project survey identified clarity and alignment of regulations as the most significant constraint to AI deployment. A significant proportion of respondents reported the absence of supervisory guidance as a factor shaping the perception of lack of clarity, along with other regulatory challenges. A sound understanding of the interplay between existing sectoral regulations and the new AI-specific regime, and the design of an appropriate supervisory response that avoids overlaps while ensuring consistent risk coverage, is key to promoting the development of safe and innovative AI.
Addressing this challenge will require targeted efforts at EU level to reduce perceived uncertainty and foster consistent regulatory outcomes across member states, ultimately promoting EU competitiveness in AI innovation in a safe and responsible manner, while safeguarding fundamental rights and ensuring financial consumer protection. Supervisory expectations and guidance could provide greater clarity for the finance industry regarding regulatory requirements and how to meet them, thereby promoting more consistent regulatory outcomes (OECD, 2026[1]).
Issuance of public-facing guidance requires EU-level clarifications on regulatory requirements (e.g. via standards and guidelines), followed by co‑ordination between domestic authorities to avoid regulatory fragmentation or pacing mismatches. The potential for inconsistencies and gaps in regulatory oversight, especially for global financial institutions, might lead to undesired fragmentation across the EU and loopholes with potential risk of regulatory arbitrage, highlighting the importance of convergence at the EU level.7 Supervisory guidance could take different forms such as the release of clarifications or supervisory expectations or be communicated through engagement with the industry (OECD, 2026[1]). For example, in the United Kingdom, the Bank of England and the FCA issued a discussion paper regarding specific aspects of the regulatory framework applicable to the use of AI in the UK financial markets, providing an overview of the key rules and guidance under the existing framework that are most relevant to mitigating the risk associated with AI (OECD, 2024[26]).
Any guidance provided should be very carefully designed and calibrated primarily at the EU level, without being overly prescriptive, to avoid negatively affecting AI adoption by impeding firms’ ability to flexibly explore using new technologies while also preserving the highest standard of fundamental rights protection. Any initiative should remain compatible with the relevant regulatory landscape in place, while recognising that not all AI systems pose equal risks. The regulatory and supervisory approach should preferably be principles-based and calibrated to the risks and impacts associated with specific AI use cases, while leveraging existing supervisory safeguards (OECD, 2026[1]).
Fostering harmonisation of supervisory guidance at the EU level, facilitated by common interpretation and guidance developed by EU authorities, can ensure consistency in the application of supervisory approaches across member states and avoid regulatory fragmentation, while equipping national authorities with the tools for monitoring compliance with newly enacted legal frameworks, such as the AI Act.8
EU supervisory authorities are actively monitoring the application of the regulatory and supervisory framework for AI technologies and issuing guidance where necessary. ESMA has published, among others, guidance on the use of automated systems for provision of investment advisory and portfolio management services (ESMA, 2018[27]); guidance on the use of AI in the provision of retail investment services (ESMA, 2024[28]); and, a warning for investors regarding the use of AI for investing (ESMA, 2025[29]). The EBA issued guidelines on Loan Origination and Monitoring (LOAM) in 2020 (EBA, 2020[30]). EIOPA has published, for example: a report setting out AI governance principles for ethical and trustworthy use of AI in the insurance sector (EIOPA, 2021[31]); and, an opinion on AI governance and risk management (EIOPA, 2025[32]). Other EU authorities are monitoring the need to issue guidance for matters indirectly related to the finance sector, such as data protection.
Italian supervisory authorities should continue to contribute actively to ongoing EU initiatives for the simplification of the regulatory framework applicable to AI with a focus on the finance sector, with each Italian supervisory authority retaining primary responsibility for financial activities falling within its respective designated remit. The Italian financial authorities have noted that the implementation of obligations stemming from the EU AI Act will require intense co‑ordination efforts between the authorities, both at the domestic and EU levels, including co‑operation with the non-financial authorities involved.9 A clear definition of roles and enhanced dialogue between horizontal and sectoral authorities should provide more clarity to the market, respecting the mandate of each institution. Also, a comparative exercise between EU jurisdictions could help to identify best supervisory practices for specific AI use cases.10
Furthermore, international co‑operation, for example through information sharing and participation in international fora, promotes alignment between different jurisdictions, ultimately benefiting domestic authorities as well (OECD, 2026[1]). Italian authorities should continue to actively participate in international AI-related co‑ordination initiatives and discussions, such as in the fora provided by the OECD or the FSB, thereby also benefiting from insights from non-EU jurisdictions.
Clarify supervisory expectations for supervised entities (public-facing)
Clarifications related to the implementation of the EU AI Act
More generally, at the regional level, EU authorities should continue to pursue ongoing simplification efforts and strengthen efforts underway to address perceived uncertainties arising from newly enacted legislation and its interaction with existing sectoral and other applicable rules. High-level clarifications on applicable regulatory obligations, together with supervisory expectations and guidance, could help mitigate perceived ambiguity and provide greater legal certainty, facilitating both internal governance and further investment into AI innovation. This can be particularly relevant as the Italian authorities have already flagged certain areas that would benefit from a clarified interpretation, related to governance and risk management, as well as explainability requirements.11 Stakeholders could also benefit from further clarification on whether the regulatory definition of AI excludes simple statistical techniques used for predictive applications, such as linear or logistic regression, as well as systems used for mathematical optimisation purposes.
Many project survey respondents disclosed concerns related to the application of the AI Act and its overlap with the pre‑existing sectoral regulation. Clarifying how the AI Act interacts with pre‑existing frameworks could help firms navigate their compliance obligations more effectively and promote legal certainty and consistency across the EU.12
EU institutions are actively monitoring the need to clarify the requirements stemming from the newly enacted AI Act. ESMA is currently taking measures to facilitate National Competent Authorities’ (NCAs’) assessment of AI‑related market trends, to support AI-related supervisory capacity building, and to assess the gaps and overlaps between the AI Act and the relevant sectoral regulations. The EBA is currently working to promote a co‑ordinated and consistent implementation of the AI Act across the EU banking and payments sector, for example through a summary factsheet which was published on the EBA website in November 2025 (EBA, 2025[33]). Banca d’Italia has promoted the set-up of an informal network with many other EU prudential and market authorities, related to the ongoing implementation of the AI Act. AI workshops are also included in the SSM strategy as part of the supervisory priorities. An EIOPA Opinion provides further clarity on the main principles in the insurance sectoral legislation that are applicable to AI systems, even if not considered as prohibited or high-risk under the AI Act (EIOPA, 2025[32]). Furthermore, the EU Supervisory Digital Finance Academy, established by the ESAs and the European Commission, provides AI-related training for NCA staff, as detailed in section 3.2.8.
The formulation of guidance complements the existing, largely tech-neutral rules, while taking into account the distinct characteristics of AI technologies and practical co‑ordination challenges due to varying levels of maturity across sectoral regulations.13 Financial institutions seem to struggle particularly with the application of AI-related risk management frameworks, explainability requirements and output robustness issues (OECD, 2026[1]).
Italian authorities could consider consolidating and harmonising the existing mapping exercises to ensure that the sectoral differences are communicated at the EU level and possibly addressed in the clarifications. In this regard, BdI, together with other NCAs, is in the process of identifying the areas where further guidance on the use of AI in the banking sector is needed.14 In the insurance sector, particular challenges have been identified in the application of new AI requirements (and the exception regime) to the complex statistical models used by insurers long before the GenAI market boom.15 Italian authorities are involved in the development of guidelines at the EU level on the interplay between the AI Act and EU financial services laws. Equally important is the role of Italian authorities in effectively communicating the outcomes of the EU-level clarification efforts to supervised entities in the domestic market.
General clarifications of supervisory expectations
Italian financial authorities may also consider issuing public-facing high-level clarifications on the practical aspects of AI experimentation and deployment by the financial sector, including the treatment of data for training and testing. This may take into account the newly enacted AI regulation, without creating new regulatory layers, while distinguishing sectoral differences and involving authorities outside the traditional scope of financial regulation. These should be closely aligned with the principles and objectives established or under development at the European level, and in close co‑ordination with the European authorities.
Notably, most respondents to the project survey do not see any major conflicts with existing sector rules or regulatory requirements. Instead, firms call for more co‑ordinated and proportionate regulatory guidance, tailored to the particular types of financial sector use cases. Additional clarifications can contribute to providing legal certainty for firms, which may enhance innovation and confidence to direct investment towards AI experimentation (OECD, 2026[1]).
Carefully calibrated clarifications could refer to practical examples of obstacles identified by the supervised entities, relevant to particular AI use cases and experimentation efforts. Providing interpretative guidance and practical clarifications surrounding the existing and emerging high-level principles, for example related to a clear distinction between AI applications supporting internal or operational processes and those directly affecting decision making, market integrity or consumer outcomes, may help firms organise their internal governance frameworks in a robust manner (OECD, 2026[1]). Any such guidance should avoid over-prescription of compliance obligations, especially for smaller entities, which reported regulatory compliance challenges during the project bilateral meetings.
Furthermore, the rapid pace of technological advancement may make overly prescriptive guidance rapidly obsolete. As found in the project survey, currently 39% of respondents use AI as part of their activities, indicating that AI deployment is not yet pervasive. While this could suggest that supervisors may issue guidance later in due course, such an approach must be balanced against the widely reported perceived lack of clarity, notably regarding compliance with the requirements of newly adopted AI regulation.
On a practical level, the Italian authorities could consider including in the publicly issued clarifications the regulatory and supervisory matters that arise in their interactions with industry participants via the innovation facilitator facilities, among other forms of public-private co‑operation (see sections 3.2.5 and 3.2.6).
Support compliance with data governance frameworks
Italian firms responding to the OECD project survey identified compliance with data protection requirements as a significant barrier to AI deployment, with particular emphasis on the complexity of compliance with the General Data Protection Regulation (GDPR) obligations. Notably, these insights capture the respondents’ perception of administrative burden, rather than suggesting that data protection and data governance frameworks fundamentally hinder the development or deployment of AI systems (including sectoral requirements as provided in Capital Requirements Directive or Solvency II Directive). It should be noted, however, that both commonalities and divergences exist between AI principles and privacy principles (OECD, 2024[34]).
Although issues related to personal data protection compliance fall outside the mandates of the financial supervisory authorities, they could consider strengthening co‑operation, co‑ordination and information sharing with national and European DPAs. Project survey results show that Italian financial firms are looking for more clarity regarding the application of data protection obligations in a finance‑specific context. Moreover, the industry seeks clarity on the treatment of non‑GDPR‑governed data used for training and testing (e.g. non-personal data, publicly available data, synthetic datasets), which may be subject to reduced protection but still raises implications for lawful processing, data minimisation, traceability, and model risk, including bias propagation and unintended personal data inference.
Compliance with data governance frameworks has been reported as a challenge for AI deployment across OECD economies. The efficiency and robustness of AI models are highly dependent on the quality of data used for model training, with errors or biases likely to translate to potentially biased or discriminatory outcomes (OECD, 2021[35]). The opacity and lack of explainability of AI systems further limit supervisors’ ability to examine the possibility of biased or unfair outcomes. Accordingly, supervisory tasks increasingly include promoting efficient data governance practices, necessary to provide structure to vast amounts of unstructured data used for AI model development (OECD, 2026[1]).
At the EU level, the European Commission’s ‘Digital Omnibus’ proposal makes constructive progress toward streamlining regulatory requirements related to AI and data governance, aiming to bring considerable administrative relief. The package is expected to improve access to data by consolidating various data regulations, simplifying cybersecurity reporting, and offering new guidance (EU, 2025[36]). While the Digital Omnibus remains a proposal, Italian financial authorities have an opportunity to engage in the ongoing discussions, for example by identifying the areas where supervised entities, notably smaller firms, could benefit from broader clarifications of the data protection and governance requirements. Once such concerns are identified, Italian financial authorities could consider collaborating with data protection authorities to communicate clarifications on these issues to supervised entities. Authorities could promote industry initiatives to identify practical solutions to the detected issues. Consideration may also be given to spreading awareness concerning the use of technical and organisational tools that support robust data governance practices by the supervised entities. The OECD AI Policy Observatory (OECD.AI) operates a catalogue of tools and metrics that aim to promote trustworthy AI (OECD.AI, 2026[37]). Other categories of procedural and educational tools can assist with operational guidance and upskilling efforts for all stakeholders. Measures such as fairness metrics and bias mitigation tools may be suggested as possible additions to internal data governance frameworks (OECD.AI, 2026[37]).
Enhance co‑operation with non-financial authorities
The cross‑cutting nature of AI, and the potential for regulatory fragmentation when horizontal regulations interact with sector‑specific financial rules, highlight the need for structured co‑ordination between financial supervisors and non‑financial authorities (data protection, competition, cybersecurity). More harmonised frameworks for interacting with non-financial authorities, and synergies for cross-regulatory co‑operation of the kind initiated at the OECD, may be crucial for clarifying the challenges that financial sector participants encounter in AI deployment.
In Italy, Law 132/2025 on the “Provisions and powers delegated to the Government regarding artificial intelligence” creates a mandate for AgID and the National Cybersecurity Agency (ACN) to act as national authorities for AI. This is without prejudice to the roles of the financial supervisors (BdI, CONSOB, and IVASS), which are also designated as market surveillance authorities pursuant to and in accordance with Article 74(6) of Regulation (EU) 2024/1689. Thus, the enforcement of the Italian AI law will involve co‑operation across different authorities, which could be supplemented with less formalised exchanges and remain responsive to emerging challenges.
EU data protection bodies, notably the EDPB, along with the national DPAs, are actively monitoring the need to issue clarifications regarding the applicability of the requirements to AI model deployment. For example, the EDPB adopted an opinion in 2024 on the use of personal data for the development and deployment of AI models (EDPB, 2024[38]). At the national level, the Italian DPA (Garante per la protezione dei dati personali) is active in issuing guidance for the application of AI technologies in specific fields, including the use of AI in healthcare services, law enforcement and the detection of tax evasion (Garante, 2024[39]).
Without encroaching upon the mandates of data protection bodies, Italian financial authorities should consider engaging further in the ongoing discussions by contributing more granular insights specific to the finance sector. For example, clarifications on what would constitute legitimate interest as the legal basis for the collection of data could be particularly helpful for the training of AI models by the finance sector (and beyond).
Furthermore, co‑operation with non-financial authorities may also facilitate the implementation of other policy considerations related to encouraging stronger AI governance (see sections 3.2.3 and 3.2.4).
3.2.3. Require stronger AI governance arrangements for supervised entities
Supporting efforts to promote stronger governance structures
Effective and robust arrangements for organisational AI governance are crucial to manage risks of AI in finance. Effective governance should encompass human oversight, safety, security and accountability in alignment with what is set out in the OECD AI Principles, the first intergovernmental standard on AI adopted in 2019 and updated in 2024 (OECD, 2024[40]). Deployers of AI technologies, like all actors along the AI system lifecycle, should ensure the security, safety and robustness of AI systems throughout their lifecycles, and mitigate risks of harm (OECD, 2024[40]).16
The OECD project survey revealed that financial firms currently apply a wide range of approaches to AI governance, with AI strategies, guidelines, principles and/or codes of conduct tailored to their use of AI applications, as well as cybersecurity and operational risk frameworks, reported as the most common. This creates a layered and heterogeneous framework that, while effective in addressing emerging issues, may also result in policy and supervisory fragmentation for specific AI use cases (OECD, 2025[41]). Supportive efforts may include the different aspects discussed in this section, such as providing an overview of cross-sectoral assistance on governance frameworks in use, engaging on specific issues related to non-critical third-party arrangements, or strengthening cyber-resilience frameworks.
Italian authorities could consider supporting and promoting the strengthening of organisational governance structures by ensuring that boards and senior management establish robust, risk-proportionate governance, with ultimate accountability resting with the board. In line with regulatory requirements, authorities may encourage boards of entities deploying AI to set effective strategies for AI development and use, and to adopt robust policies for managing associated risks, including operational, legal and reputational risks. Boards should periodically review AI’s contribution to performance and ensure senior management appropriately monitors and mitigates AI‑related risks. Ensuring the security, safety and robustness of AI systems should also involve producers (safety by design). Italian authorities may also encourage supervised entities to provide supervisory authorities with adequate and timely information regarding their use of AI tools.
High-level, cross-sectoral assistance on the development of AI governance frameworks
Italian authorities could consider supporting stronger governance structures through closer engagement with supervised entities or through high-level, cross-sectoral guidance that takes a risk-based approach. Such collaboration should underscore the importance of maintaining regulatory approaches within established frameworks, thereby minimising unnecessary administrative burdens while enhancing overall system resilience.
In line with the OECD AI Principles, which recommend systemic risk management approaches across the AI system lifecycle phases, and with the EU AI Act, a risk-based approach is recommended, as risks may vary across sectors and be more prevalent in different areas (for example, privacy, cyber, bias or financial stability risks), while also having varying levels of impact on market integrity and resilience (OECD, 2023[42]). While processes, operations, risk scope, AI system lifecycles and specific terminology may diverge, the ultimate purpose of AI governance frameworks remains the same – supporting responsible, ethical and trustworthy AI (NIST, 2023[43]). Governance environments for AI should be interoperable, so that they can be adapted to specific AI purposes. Robust data and model governance frameworks for AI systems should be implemented with appropriate human oversight and validation across all segments of the financial system.
Cross-sectoral assistance may support firms in adapting existing governance processes. The framework could include general rules and guidelines relating to the definition, assessment, treatment and governance of identified threats.
Italian supervisory authorities can help firms adopt appropriate AI governance structures by providing an overview of the models currently in use by financial sector participants. Specifically, the authorities could guide Italian financial sector participants by providing high-level, generic assistance on different types of governance frameworks, in line with a gradual approach and in close co‑ordination with other Italian authorities. This could be particularly beneficial for smaller entities and firms operating across different sectors, while also facilitating cross-sectoral harmonisation of regulatory requirements (OECD, 2024[26]). The documentation, communication and comparability across sectors would enable stronger oversight and proactive policy formulation supporting robust AI governance (OECD, 2023[42]).
In addition, one practical measure could be the development of standardised multi-sectoral AI terminology and definition catalogues, which can be used as common reference bases for supervised entities, and also supervisory authorities, thereby optimising compliance and enforcement costs (OECD, 2023[42]). Such an exercise could be combined with the general effort of ensuring clarity around the newly established AI regulations (see section 3.2.2).
To support the development of cross-sectoral structures for AI governance in the financial sector, co‑operation amongst supervisory authorities is a key prerequisite. The formal establishment of inter-sectoral working groups, which should have clear objectives and roadmaps, is encouraged to support and nurture operational resilience, trust and stakeholder confidence, while also encouraging innovation (GARP, 2025[44]). It is also recommended that supervisory expectations aligned with EU regulation are clarified and that perceived lack of regulatory clarity is addressed, as discussed in section 3.2.2. Furthermore, such efforts should be accompanied by general training and development initiatives, as discussed in sections 3.2.7 and 3.2.8.
Assisting firms in the governance of non-critical third-party arrangements17
Third-party providers range from highly critical to non-critical, with the latter not subject to the EU oversight framework. However, “non-critical” third parties may play key roles in facilitating AI deployment. Closer engagement and potential guidance could help firms increase transparency around these arrangements and manage associated risks. This could also help address the challenges related to the overall perception of regulatory certainty regarding the applicability of emerging technologies and the establishment of sectoral rules, as flagged by OECD project survey participants and discussed in section 3.2.2.
DORA establishes a framework for operational resilience in the EU finance sector. DORA implementation may introduce certain challenges related to the mapping of third parties and/or definitional issues (i.e. uncertainty about the classification of third-party providers). As DORA’s core obligations continue to apply to in-scope financial entities, AI-related services such as a customer-service chatbot can create potential operational and compliance risks for financial entities.
Considering that not all third parties may be subject to the regulation or oversight of financial sector supervisory authorities, it is essential to promote cross-sectoral co‑operation between the competent authorities in financial market-related sectors. A proactive and multi-segment approach can allow for an efficient, real-time identification of risks related to AI deployment, going beyond the financial sector (OECD, 2024[26]; FSB, 2024[45]; US Department of the Treasury, 2024[46]; ECB, 2024[47]).
Challenges related to the governance of non-critical third-party providers are expected to rise with the proliferation of GenAI-based tools, as the deployment of GenAI tools is marked by low transparency (OECD, 2023[48]). Robust governance frameworks should account for the changing technological landscape, including the autonomous action of any system with GenAI capability (OECD, 2024[26]; 2023[48]). In a wider context, Italian authorities may monitor and consider the issue of third-party service concentration from a systemic risk perspective, as widely discussed in international fora (OECD and FSB, 2024[9]; FSB, 2024[45]).
Promoting use of explainability methods
Being able to explain a model’s output to people (known as explainability) is crucial for ensuring transparency and accountability and for building consumer trust; specific techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) can support this. The importance of explainability of AI-based decisions may be particularly relevant to financial markets due to the potential for consumer detriment or even systemic risks (OECD, 2023[48]; 2021[35]).
The OECD AI Principles, the Basel Core Principles and the Insurance Core Principles have all addressed explainability and emphasised the importance of independent validation, assessment of technical provisions, ensuring suitability for their intended purpose, and allowing for scrutiny and periodic reviews (Perez-Cruz et al., 2025[49]). The OECD Recommendation of the Council on AI specifies transparency and explainability as one of five complementary values-based principles relevant to all stakeholders (OECD, 2024[40]).
At national levels, the Canadian Office of the Superintendent of Financial Institutions (OSFI), the Financial Services Agency of Japan (FSA), the Prudential Regulation Authority (PRA) of the United Kingdom, the Federal Reserve Board (FRB) and the US Office of the Comptroller of the Currency (OCC) are all aligned on the imperative need for AI explainability (Office of the Superintendent of Financial Institutions, 2025[50]; Financial Services Agency of Japan, 2021[51]; Bank of England, 2025[52]; Federal Reserve, 2025[53]; Office of the Comptroller of the Currency, 2021[54]).
Project survey respondents reported that the adoption of certain AI models is often limited due to explainability considerations, especially in highly regulated areas such as pricing, underwriting, and claims management. Under the EU AI Act, transparency requires AI systems to allow for explainability and traceability.
Italian authorities should therefore consider promoting further use of explainability methods for AI models in a proportionate manner, given the current limited use of such tools. As there is no single “correct” approach to explainability, guidance and support could include risk-based principles, reflecting the level of materiality of different finance‑sector specific AI use cases and their potential impacts. An overview of different governance arrangements currently in use by Italian financial firms, as discussed above, could also include the identification of various explainability methods deployed by supervised entities. In other words, a stocktaking exercise mapping the explainability methods currently employed by Italian financial-sector firms would help deepen understanding of how such methods are used in practice within the relevant contexts.
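To illustrate the family of model-agnostic explainability methods to which SHAP and LIME belong, the sketch below computes permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy degrades. The toy model, data and function names are purely illustrative assumptions; a real stocktaking exercise would cover the dedicated libraries and techniques actually deployed by firms.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic importance: how much does shuffling one feature
    degrade accuracy? A larger drop means the model relies on that feature."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]  # fresh copy of column j
            rng.shuffle(col)
            shuffled = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(baseline - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy credit-scoring model: approves when feature 0 (e.g. an income score)
# exceeds 0.5; feature 1 is noise and should receive zero importance.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9], [0.8, 0.5], [0.3, 0.2]]
y = [model(r) for r in X]
imp = permutation_importance(model, X, y)
print(imp)  # feature 0 importance is positive; feature 1 is exactly 0.0
```

The output directly answers the supervisory question “which inputs actually drive this decision?”, without requiring access to the model’s internals.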
Supervised entities should be encouraged to provide adequate and timely information to supervisory authorities on their use of AI tools. The suggested cross-sectoral assistance on governance models would ensure that different explainability standards and thresholds are consistent with Articles 13, 27 and 50 of the EU AI Act (European Union, 2024[55]) but with a more deliberate focus on the financial markets and related activities.
Promote AI-specific financial consumer protection and literacy
In line with clearer explainability, as required under the AI Act, both financial service providers and Italian authorities should promote consumer AI literacy in financial services. This can help consumers use AI‑enabled financial services safely and recognise online fraud and other dangers, such as phishing, reducing the risk of account takeover or data theft. Financial consumer AI literacy complements the general financial consumer protection framework, enabling consumers to better understand safe online behaviour practices, thereby strengthening trust in digital finance in general (World Bank, 2025[56]; OECD, 2025[57]).
In this regard, the OECD Recommendation on Financial Literacy is the leading global instrument on financial literacy, designed to assist governments, public authorities, and relevant stakeholders in their efforts to design, implement and evaluate financial literacy policies (OECD, 2020[58]). The Recommendation encourages governments and other stakeholders to promote the understanding of the characteristics and risks of traditional and innovative financial products and services, and to empower individuals to use them, depending on their personal situation. Limited digital literacy may also restrict the extent to which consumers and investors can use AI tools to their own benefit, including understanding the opportunities and the risks (OECD, 2023[59]). Furthermore, in September 2025, the European Commission launched the Communication on a financial literacy strategy (European Commission, 2025[60]), which should be used by both providers and Italian authorities in their efforts to promote consumer AI literacy in financial services. The financial literacy strategy aims to help citizens make sound financial decisions, and ultimately to improve their well-being, financial security and independence. In the context of growing AI applications in financial markets, it can strengthen citizens’ capacity to engage confidently with AI-enabled financial products and services, enabling them to benefit from innovation in a safe, responsible and well-protected manner.
Robustness in AI cyber-resilience frameworks
As found in the project survey, almost half (46%) of respondents have not implemented any specific safeguards to address the emerging AI-specific cyber threats. Italian authorities should consider emphasising the critical importance of robust cyber-resilience frameworks addressing AI-related risks, promoting the reinforcement of cyber-specific preparedness of supervised entities related to AI adoption, while aligning these efforts with the DORA framework. The authorities should continue to pursue their close co‑ordination, both domestically and at the EU level, with cybersecurity agencies, to ensure that firms and their ICT service providers comply with DORA requirements and implement effective cyber-resilience frameworks for AI.
The OECD AI Principles state that AI systems should be robust, secure and safe, and should not pose unreasonable safety and/or security risks. The Principles call for AI systems that risk being used for malicious purposes (e.g. deepfakes)18, causing undue harm or exhibiting undesired behaviour, to be capable of being overridden, repaired and/or decommissioned safely as needed (OECD, 2024[40]). Although the OECD has not yet established an explicit, formal definition of cyber resilience, the concept is encompassed within the broader framework of digital security (OECD, 2024[40]), as well as within an explicit recommendation on the application of a systematic risk management approach (OECD, 2024[40]).
AI actors are required to employ a systematic risk management strategy throughout every phase of the AI system lifecycle, taking into account their specific roles, context, and capabilities. Pursuant to the AI Act, the scope of risk management should address concerns such as harmful bias, human rights – including safety, security, and privacy – as well as labour and intellectual property rights (OECD, 2024[40]).
Under Article 19, DORA imposes obligations related to reporting of major ICT-related incidents to the competent authorities, along with all the relevant information on the incident, in order to assess its significance and possible cross-border impacts. The same article also creates a voluntary basis for financial firms to notify the competent authorities of relevant cyber threats that could impact the financial system (EU, 2022[61]).
The OECD AI Principles are aligned with DORA through their emphasis on an agile policy environment that includes reviewing or adapting frameworks where necessary. This closely correlates with DORA’s resilience testing and incident preparedness measures. Additional consistency is evident in the advocacy of policy-driven digital security, aligning with DORA’s incident reporting protocols and third-party oversight obligations (OECD, 2024[40]; EU, 2022[61]).
To that end, current operational protocols for information exchange and joint incident reporting among financial and cybersecurity authorities could be integrated to address AI-specific threats, such as adversarial attacks and model vulnerabilities, ensuring timeliness while also avoiding duplication of efforts for DORA compliance. Co‑ordination between financial supervisory authorities and the National Cybersecurity Agency is strongly encouraged, underpinned by an incident reporting framework that is aligned with the OECD AI Principles, also extended to international co‑operation (OECD, 2024[40]). Financial market supervisory authorities are also encouraged to promote AI research and development including cross-sectoral and interdisciplinary efforts (OECD, 2024[40]).
AI for cyber resilience
Italian authorities may encourage the use of AI-based tools to strengthen cybersecurity and operational resilience across the Italian financial sector, particularly for FMIs. Initiatives could include voluntary certification schemes, collaborative networks such as CERTFin and the sharing of best practices among financial institutions, addressing emerging AI-specific threats such as model/data manipulation and adversarial inputs. This is particularly relevant in the context of the G7 Cyber Expert Group’s “Statement on AI and Cybersecurity”, calling for ongoing monitoring of the AI-related risks (G7, 2025[62]).
Italian authorities could consider adopting a proportionate, non-prescriptive approach, ensuring that measures remain voluntary and supportive, while helping firms identify and mitigate AI-related risks in practical experimentation and learning environments. Efforts should be informed by recent cross-border discussions,19 and by ongoing supervisory exchanges and workshops. They could provide a unique contribution by framing AI as both a source of operational risk and a practical tool for cyber defence, offering concrete, data‑informed support to institutions and supervisors alike, and reflecting an original approach tailored to the Italian financial ecosystem.
AI scenarios in operational resilience testing
Italian authorities could encourage the systematic inclusion of AI-related risk scenarios within existing operational resilience and cyber-testing frameworks, such as DORA and TIBER-EU, with particular attention to FMIs. Evidence from the project survey indicates that AI is already deployed or actively being experimented with by most Italian FMIs, while at the same time fewer than one in ten financial institutions report mature and continuously updated safeguards against AI-specific cyber threats, highlighting a potential vulnerability in critical market functions.
AI-focused testing scenarios could therefore address concrete failure modes and attack vectors observed in practice and international policy discussions, including compromised or corrupted training data, model manipulation, adversarial inputs, and disruptions affecting the availability or integrity of AI-enabled services supporting critical operations. Embedding such scenarios into resilience testing would support earlier identification of weaknesses in AI systems and their dependencies, including third-party and cloud-based components.
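One way an adversarial-input scenario of this kind could be operationalised in a resilience test is a simple perturbation harness: apply small random perturbations to inputs and measure what share of predictions remain stable. The harness and the toy fraud-flagging model below are illustrative assumptions only, not a prescribed testing methodology.

```python
import random

def perturbation_robustness(predict, inputs, noise=0.05, trials=20, seed=1):
    """Share of inputs whose prediction is unchanged under small random
    perturbations -- a crude stand-in for an adversarial-input test scenario."""
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        base = predict(x)
        if all(predict([v + rng.uniform(-noise, noise) for v in x]) == base
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)

# Hypothetical fraud-flagging model: flags when the feature sum exceeds 1.0.
model = lambda x: int(sum(x) > 1.0)
inputs = [[0.2, 0.3], [0.9, 0.8], [0.49, 0.5], [0.1, 0.1]]
score = perturbation_robustness(model, inputs)
print(score)  # inputs near the decision boundary (e.g. [0.49, 0.5]) tend to flip
```

A low stability score on inputs near the decision boundary is exactly the kind of weakness such a scenario is meant to surface before it is exploited in production.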
A pragmatic implementation path could leverage AI sandboxes and other innovation facilitators as environments for piloting and refining advanced resilience scenarios during the experimentation phase, particularly for AI solutions intended for use in critical market infrastructures. This approach would allow resilience considerations to be incorporated upstream, while generating supervisory insight and operational evidence that can progressively inform expectations for the most systemically relevant entities, in line with existing operational resilience frameworks and international best practices.
Promote the development of national guidance and reference taxonomy for AI-related incidents
Promoting the development of national guidance and a reference taxonomy for AI-related incidents, accompanied by a concrete reporting reference framework, would be beneficial.20 This would provide a structured classification of AI incidents (e.g. direct causes, contributing factors, severity and impact types) and a limited set of consistent reporting fields, enabling firms and authorities to aggregate, analyse, and reuse incident information already collected under existing frameworks.
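To make the idea of a structured classification with a limited set of consistent reporting fields concrete, the sketch below models a hypothetical AI incident record. The field names and enumeration values are illustrative assumptions only; an actual national taxonomy would be defined by the authorities, for example building on the FSB FIRE work.

```python
from dataclasses import dataclass, field, asdict
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class DirectCause(Enum):
    DATA_QUALITY = "data_quality"
    MODEL_MANIPULATION = "model_manipulation"
    ADVERSARIAL_INPUT = "adversarial_input"
    THIRD_PARTY_FAILURE = "third_party_failure"

@dataclass
class AIIncidentReport:
    incident_id: str
    reporting_entity: str
    direct_cause: DirectCause
    severity: Severity
    impact_types: list = field(default_factory=list)       # e.g. ["consumer_harm"]
    contributing_factors: list = field(default_factory=list)
    cross_border_impact: bool = False

    def to_record(self) -> dict:
        """Flatten to a plain dict suitable for aggregation and trend analysis."""
        d = asdict(self)
        d["direct_cause"] = self.direct_cause.value
        d["severity"] = self.severity.value
        return d

report = AIIncidentReport("INC-001", "Bank X", DirectCause.ADVERSARIAL_INPUT,
                          Severity.HIGH, impact_types=["service_disruption"])
print(report.to_record()["direct_cause"])  # adversarial_input
```

Because every report shares the same fields and controlled vocabularies, records from many firms can be pooled and compared, which is precisely what enables the aggregation and reuse of incident information described above.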
Evidence from the OECD project survey points to a growing relevance of AI-related incidents and cyber events, alongside increasing complexity and fragmentation of reporting requirements. This is particularly challenging for smaller and less resourced firms, while for FMIs and systemically relevant intermediaries the key issue is the potential propagation of operational and cyber risks across the financial system. A shared taxonomy and reporting reference would help address both dimensions, by easing classification and reporting efforts for smaller actors and enabling effective aggregation, trend analysis and risk monitoring for critical entities, including through anonymised data analysis where appropriate.
The initiative would be aligned with ongoing international work on AI incident classification and reporting, such as the FSB’s FIRE framework (FSB, 2025[63]) and designed to be interoperable with existing regimes, including incident reporting provisions under the EU AI Act, the OECD AI incidents reporting framework developed in partnership with the European Commission, and other applicable operational resilience frameworks. By improving the consistency and comparability of incident information, such guidance could strengthen supervisory understanding of emerging AI risks while supporting market participants in developing more resilient AI practices.
Promote a national AI ecosystem to strengthen supply-chain resilience and facilitate access to innovation
The results of the survey point to a heavy reliance on a concentrated number of third-party vendors for AI-related services in Italy. The results also show that financial market participants expect an increasing use of third-party based AI models across key financial market functions, particularly in compliance‑related areas such as AML/CFT, as well as more operational areas such as pattern recognition, asset and portfolio management and risk modelling.
Further evidence from the project survey highlights that a significant share of cyber incidents affecting Italian financial institutions occur through third- and fourth-party providers, sometimes leading to material or system-wide impacts. Smaller market participants often lack the resources or expertise to assess and manage these risks effectively, while critical FMIs and key market participants operate in an environment where failures can propagate rapidly throughout the system.
Additional insights from the mapping exercise also show that while AI currently plays a predominantly operational role, Italian authorities and intermediaries recognise its potential to become a central factor in market activity over the medium to long term. A targeted policy option would therefore promote a national AI ecosystem that provides secure, trusted, and interoperable solutions accessible to smaller intermediaries. This approach would help reduce concentration and dependency risks in the AI supply chain, facilitate adoption of AI innovations and enable smaller players to leverage AI capabilities without incurring disproportionate governance or compliance burdens.
For FMIs and systemically relevant participants, this ecosystem would support more effective monitoring and management of third-party dependencies and cyber risks. For smaller operators, it would improve access to reliable AI services, promote innovation, and reduce barriers to entry. In both cases, the approach would strengthen the overall resilience, inclusivity, and competitiveness of the Italian financial system. This initiative aligns with international discussions on AI and cyber risk management (e.g. the G7 Cyber Expert Group Statement on AI and cybersecurity (G7, 2025[62])).
3.2.4. Promote safe data-sharing frameworks and practices
Foster safe data-sharing, operationalise open-finance technical standards and promote other data-sharing frameworks
This section proposes measures to support efficient and safe data-sharing frameworks in Italy, as a way to facilitate the development of AI tools while protecting consumer rights. This complements the policy considerations in section 3.2.2, which focus on facilitating firms’ compliance with data governance requirements.
Italy’s existing data-sharing frameworks
The current approach to data sharing and interoperability in finance in Italy is based on the EU’s Payment Services Directive 2 (PSD2) framework, not on broader open finance. Safe data-sharing exists in some areas, but there are no common systems or clear rules to make it work at scale. There is no indication of a binding, cross‑sector API regime beyond payment accounts, with most cross‑institution sharing handled through bilateral contracts and heterogeneous interfaces that limit data portability and reusability. Industry conventions exist in interbank automation and secure messaging, yet they do not amount to uniform data package schemas, common taxonomies or end‑to‑end standards for concerted data portability across product lines. In the Italian insurance sector, current rules allow for – but have not yet implemented – the portability of vehicle IoT data and the sharing of claims information across all non-life business lines. Consented data reuse for analytics is constrained by non‑uniform formats and duplication of onboarding steps.
OECD project survey respondents report frictions linked to data licensing, uneven data quality, and non‑standard schemas that make cross‑firm sharing costly to operationalise. Heavy reliance on bespoke data pipelines and vendor‑specific interfaces further raises switching costs and reduces portability. These conditions slow the creation of shared, high‑quality, sector-specific datasets that would otherwise underpin interoperable AI solutions across institutions.
Almost one in three survey respondents report data accuracy and consistency as barriers, and over one in four face difficulties accessing necessary data. Among the Italian firms that responded to the survey question on types of training data used, 88% use internal training or fine‑tuning data for AI-related applications, while 61% rely on publicly available data. Only 22% disclosed the use of third-party licensed data, and 18% indicated that they employ acquired datasets for training or fine‑tuning purposes. These results indicate the importance of both keeping internal datasets of high-quality for in-house AI model development, and of better data-sharing frameworks for AI development for certain use cases.
Box 3.1. Approaches in other jurisdictions that show positive results
Brazil’s Open Finance framework shows how regulation‑led API standardisation can be user‑centric and safe, while achieving mass adoption that readies the ground for AI. The regulatory path from payments modernisation to Open Banking and Open Finance includes mandatory sharing via standardised APIs, strong consent, and cyber safeguards – implemented under Central Bank ordinances and hybrid governance with industry. That structure has scaled: by September 2025 Open Finance Brasil reported 77 million accounts connected, 118 million active consents and over 4 billion weekly API calls, alongside concrete use such as 35 million users of account aggregation, BRL 14 billion in H1 2025 investment aggregation transactions, and BRL 31 billion in credit operations supported by shared data – evidence that safe interoperability can unlock real customer and market value and, critically, a rich, standardised data substrate for next‑wave AI use cases.
One of Singapore’s Open Finance building blocks, SGFinDex, illustrates a public – private, API‑enabled data utility that is already feeding AI‑enhanced services. Developed by the Monetary Authority of Singapore with banks and government agencies, SGFinDex lets individuals aggregate bank, government and insurance data through consent secured by Singpass (Singapore’s digital ID), providing a common, secure interface for multi‑institution financial planning. On top of this, the Singaporean industry has built AI‑powered planners and nudging tools: firms leveraged SGFinDex and AI to deliver hyper‑personalised insights and broaden inclusion, extending planning to all residents while using consented, standardised data streams. Independent research has mapped Singapore’s growing preference for such market‑driven or guided Open Finance approaches that pair interoperability with privacy, creating fertile ground for scalable AI personalisation.
EU and local initiatives can help Italy move towards more interoperable data-sharing frameworks
The interplay between safe, standardised data sharing, Open Finance APIs and AI creates complementary benefits across innovation, inclusion and risk control. Interoperable, consent‑based data flows lower integration costs, improve data quality, and enable cross‑market portability, which in turn expands the training and validation datasets needed for robust AI while safeguarding consumers through clear accountability and purpose limits. Consistent taxonomies, common data packages and auditable consent lifecycles make model outcomes more testable across institutions, because firms query like‑for‑like fields rather than bespoke formats; this predictability strengthens downstream explainability and monitoring, and supports pro‑competitive entry by reducing vendor lock‑in (OECD, 2026[3]). Italian authorities may enhance collaboration with other financial and non-financial authorities to encourage standardised financial data-sharing as a means of fostering AI innovation.
Safe data‑sharing also strengthens authorities’ ability to spot emerging risks. Interoperable, machine‑readable data is associated with better anomaly detection and outcome testing, because institutions and authorities can query standard fields across firms without bespoke transformation (OECD, 2024[26]). Progress on AI for policy depends on data governance and infrastructure; standardised access paths reduce lags and ambiguities that otherwise blunt risk monitoring. Interoperability therefore acts as a two‑sided enabler, supporting both business innovation and supervisory insight while preserving clear consent and purpose limits (BIS, 2025[69]).
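To illustrate why common, machine-readable fields matter for supervision, the hypothetical sketch below runs a simple outlier test over a single standardised field reported by many firms: because every report uses the same schema, the authority can query one field name across all submissions without bespoke transformation. The field name, firm names and figures are invented for illustration only.

```python
import statistics

def flag_anomalies(reports, field, z=3.0):
    # With a common reporting schema, the same field name can be queried
    # across all firms; a simple z-score test then flags outliers.
    values = [r[field] for r in reports]
    mu = statistics.fmean(values)
    sigma = statistics.stdev(values)
    return [r["firm"] for r in reports if abs(r[field] - mu) > z * sigma]

# Twenty firms reporting similar (invented) non-performing-loan ratios,
# plus one firm whose figure stands far outside the cross-firm pattern.
reports = [{"firm": f"bank-{i}", "npl_ratio": 0.04 + 0.001 * i}
           for i in range(20)]
reports.append({"firm": "bank-x", "npl_ratio": 0.40})
outliers = flag_anomalies(reports, "npl_ratio")  # -> ["bank-x"]
```

The substantive point is not the statistics, which are deliberately trivial, but that the query is only possible because `npl_ratio` means the same thing in every submission.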
A standards‑first approach would also temper vendor concentration and improve portability. Common APIs and shared taxonomies reduce dependence on proprietary gateways, making multi‑cloud and multi‑vendor strategies easier to execute. Interoperable consent dashboards and auditable logs would clarify responsibilities under data protection law, lowering legal uncertainty at scale. Over time, this lowers barriers for smaller institutions to participate in data ecosystems, widening the pool of contributors and the diversity of training data for responsible AI (OECD, 2021[35]; Crisanto et al., 2024[70]).
EU‑level instruments point to a path where common APIs, harmonised data packages and auditable consent services support innovation, while safeguarding fundamental rights. The FiDA proposal, once operational, will create a cross‑sectoral right to access and share financial data through standardised interfaces, coupled with consent management, liability rules and technical standards that reduce fragmentation (European Commission, 2023[71]). The Savings and Investments Union (SIU) strategy highlights interoperability as a competitiveness lever, aiming to lower frictions in cross‑border participation and to widen retail access to investment markets (European Council, 2025[72]; European Commission, 2025[73]).
Importantly, even if the approval of the EU frameworks takes time or stalls altogether, national initiatives can still encourage voluntary or sector‑specific data‑sharing initiatives to accelerate progress. If implemented, FiDA technical standards can help to prioritise clear consent artefacts, harmonised data packages, and uniform scopes that cover key retail and SME datasets beyond payments. A conformance and certification layer can make participation predictable for incumbents and new entrants, while published reference test suites reduce bilateral negotiation costs (European Commission, 2023[71]). The SIU strategy’s objectives align with this direction, as interoperable access to investment data supports household participation, product innovation and market depth without prescribing business models (European Commission, 2025[74]).
Italian authorities may take advantage of this policy momentum by enhancing cross-sectoral collaboration with other authorities, alongside industry consultations, to explore how data-sharing frameworks, and the AI innovation they enable, can be supported in the financial sector. This would help to address the costs involved in the development of in-house AI models, which one in four survey respondents reported as a significant limitation. Specifically, the costs of acquiring data necessary for model development were identified as a large constraint by around 10% of respondents. Interoperable APIs and shared data architectures are linked to lower integration costs, clearer accountability chains and better data quality – conditions that are decisive for model training and monitoring (OECD, 2024[26]).
These initiatives may also encompass discussions regarding the relevant practices, methods and measures for secure data exchanges and interoperable architectures, such as the use of harmonised APIs, uniform data formats and auditable consent mechanisms. Interoperability is achieved through clear API standards, digital consent records, and compliance checks. Typical building blocks include standard data packages for accounts, payments, credit and investments; clear access boundaries and consent lifecycles; certification of participants; and dispute‑resolution mechanisms that provide a safety net to commercial agreements. These frameworks are technology‑neutral but data‑specific, which keeps privacy, security and fairness controls embedded in the interface layer (OECD, 2023[48]).
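As a stylised illustration of such building blocks, the sketch below models a consent record with an explicit purpose limitation, an expiry, and an auditable access log. All field names, scopes and values are hypothetical and are not drawn from FiDA or any existing standard; a production consent service would add cryptographic attestation, revocation and dispute handling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Consent:
    # Hypothetical consent artefact; field names are illustrative only.
    customer_id: str
    data_package: str      # e.g. "accounts", "payments", "credit"
    purpose: str           # explicit purpose limitation
    expires_at: datetime
    audit_log: list = field(default_factory=list)

    def permits(self, package: str, purpose: str, now: datetime) -> bool:
        # Access is allowed only for the consented package and purpose,
        # before expiry; every decision is appended to an auditable log.
        ok = (package == self.data_package
              and purpose == self.purpose
              and now < self.expires_at)
        self.audit_log.append((now.isoformat(), package, purpose, ok))
        return ok

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
consent = Consent("cust-42", "accounts", "credit-scoring",
                  expires_at=now + timedelta(days=90))
```

The design point is that the privacy, purpose-limitation and audit controls live in the interface layer itself, so every data request passes through them by construction.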
Productivity, inclusion and performance gains arrive faster when institutions can reuse consented data across functions through stable interfaces rather than bespoke feeds (OECD, 2023[48]). Encouraging the uptake of these – or comparable – data‑sharing standards would yield clear benefits for firms and supervisors alike. It would also enhance risk monitoring by enabling machine‑readable, consent‑driven data flows that support anomaly detection and supervisory oversight. Such a transition should be executed at a responsible pace, with an emphasis placed on ensuring the application of common standards, aiming towards enabling high‑quality datasets that support reliable, explainable AI.
Promote participation of financial firms in EU common data spaces and contribute high-quality public-sector datasets where legally possible
Italy’s current stance on participation in common data spaces
Italian authorities have promoted innovation through innovation facilitators and dialogues (see section 3.2.6), but a framework that systematically enables Italian financial institutions to engage in Italian or European Union common data spaces would assist AI development. Efforts to improve interoperability and data quality are present yet fragmented and largely voluntary, and firms report hurdles in accessing rich, finance‑specific training datasets. Interest in European initiatives exists, but formal participation mechanisms remain limited, leaving institutions reliant on bilateral arrangements or external vendors, which constrains scalability and trust.
EU’s common data space projects and their potential for Italy
Common European Data Spaces (CEDS) seek to serve as the backbone of the single market for data. They establish trusted environments where public and private actors can share data under fair, transparent access rules, privacy‑preserving infrastructures and governance. Finance is one of the 14 CEDS industry domains, and the European Financial Data Space (EFDS) is in the implementation phase. The horizontal enablers of CEDS are cross‑sector legislation, shared tools and the implementing act on high‑value datasets under the Open Data Directive, which ensures free reuse of key public data. For Italian financial firms, engagement with CEDS would allow them to work with curated, standardised datasets, combine these with proprietary holdings, and maintain auditable provenance and usage policies (European Commission, 2025[5]).
Within this architecture, practical enablers help authorities and market participants build and operate data spaces. The Data Spaces Support Centre (DSSC) offers a blueprint, toolbox, maturity model and co‑creation method, describing building blocks across governance, legal and technical layers. These layers include identity and attestation, access and usage policies, exchange and traceability, and audit trails (Data Spaces Support Centre, 2025[4]). The Interoperable Europe Semantic Interoperability Community (SEMIC) complements this by providing semantic specifications such as the Data Catalogue Vocabulary Application Profile, training and webinars, and work on personal data spaces aligned with the European Data Governance Act,21 including data intermediaries and data altruism (European Commission, 2025[75]). Linking Italian firms to these communities would ease catalogue harmonisation, cross‑border discovery and consent‑aware reuse, thus reducing integration costs and strengthening trust.
Italian authorities should also consider how best to promote participation in EU common data spaces (e.g. EFDS, Gaia-X). Initiatives could also clarify how CEDS participation interacts with existing financial regulation, guiding supervised entities on how liability, accountability and auditability expectations continue to apply when data is accessed via a data space. Italian financial authorities could also contribute to the selection of high-quality public-sector datasets made available through the European data spaces, where legally possible. Finally, co‑ordination between national market participants and EU-level data space initiatives could be enhanced, contributing national supervisory experiences and best practices as well as identifying and reporting any cross-border frictions.
The European Data Union Strategy situates data spaces within a wider plan to unlock high‑quality data for AI. It proposes hands‑on data labs to bridge data spaces and AI ecosystems, scaling sectoral access for companies and researchers. It also streamlines rules through the Digital Omnibus alongside the European Data Act to lower compliance costs and reduce legal friction. Horizontal enablers include standards for data quality, expansion of high‑value datasets and investment in synthetic data capacity to improve the fitness of training datasets. Italian authorities should consider participating in data labs to accelerate curation and labelling of financial datasets, while “one‑click compliance” tokens and guidance on the Data Act promise machine‑verifiable reuse conditions and clearer documentation of rights (European Commission, 2025[6]).
The European Commission’s interoperability layer for data spaces will enable secure data exchange across different infrastructures in plain, testable ways. It comprises an open‑source software stack, a testing environment and managed deployments, so participants can share data under enforceable usage policies and monitor performance. Italian authorities could encourage licensed firms to use the testing environment to assess interoperability before production, and the managed deployments to run sector spaces with transparent logs and access controls (European Commission, 2024[76]).
Gaia‑X is a European industry‑driven initiative involving a federated data and cloud ecosystem that ensures sovereignty, interoperability and trust. Gaia‑X develops specifications, governance frameworks and verification services, including trust and labelling frameworks, identity services and compliance checks, providing auditable onboarding and usage control (Gaia-X, 2023[7]). Gaia‑X is already active in Italy through Gaia‑X Italia, which brings together leading Italian firms and research bodies to develop national hubs and use cases aligned with European standards, offering Italian financial institutions a direct channel to adopt Gaia‑X federation services (Gaia-X, 2025[77]).
The EFDS, currently under development, should enable Open Finance by allowing financial data to flow among stakeholders under trusted rules. If implemented with strong safeguards, the EFDS can help reduce bias in AI by giving developers access to broader, more representative datasets, notably in consumer creditworthiness assessment; this requires a clear lawful basis under GDPR, effective transparency and meaningful individual control over processing. EFDS’s value will hinge on robust data quality, namely fitness for purpose, completeness, accuracy and usability of formats (Penedo and Kramcsák, 2023[78]; Borowicz, 2024[79]).
Promote safe data-sharing practices
Italian authorities could also support efforts by data protection authorities (DPAs) to raise awareness of safe data-sharing practices, building on enhanced co‑operation to promote proactive initiatives. The OECD industry survey indicates that Italian firms perceive regulatory uncertainty and data governance constraints as barriers to broader deployment of AI, with pressure points around data access, privacy compliance, third‑party reliance and costs. Italian financial authorities may engage with non-financial authorities (e.g. data protection and cybersecurity authorities) to conceptualise the promotion of secure, standardised and interoperable data-sharing frameworks that ensure privacy-preserving data flows. Such frameworks may provide firms with reliable, high-quality datasets that can be used for AI system development. Data-sharing initiatives may also be promoted through innovation facilitators, as discussed in section 3.2.6.
An example of such practices may be found in Privacy Enhancing Technologies (PETs), as tools and methods that enable analysis and sharing of data while preserving privacy, confidentiality and compliance. They help institutions collaborate and extract value from data without exposing personal or commercially sensitive information (OECD, 2025[80]). There are diverse technologies that enable safe and efficient AI development. For example, homomorphic encryption lets computations run on encrypted data, revealing only the result and reducing disclosure risk across partners and vendors. Secure multiparty computation splits inputs across parties and combines partial results to compute a function without revealing each party’s input. Federated learning trains models across distributed datasets so that data remains in local environments while only gradients or parameters move. Synthetic data generation creates artificial datasets with similar statistical properties to the originals, enabling testing, validation and benchmarking without using real personal data. When these PETs are combined with governance and audit controls, they support scale, speed and trust in AI for regulated use cases (OECD, 2025[80]).
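As a stylised sketch of one such technique, the example below illustrates the federated learning pattern described above: three parties jointly fit a shared one-parameter model while their raw records stay in local scope and only gradients are exchanged and averaged. The data and setup are invented for illustration; production systems would add secure aggregation, differential privacy and far richer models.

```python
import random

def local_gradient(w, data):
    # Gradient of mean squared error for the model y = w * x,
    # computed on one institution's data only.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def federated_average(datasets, rounds=200, lr=0.1):
    # Federated-averaging loop: each party computes a gradient locally,
    # and only the gradients are pooled -- raw records never move.
    w = 0.0
    for _ in range(rounds):
        grads = [local_gradient(w, d) for d in datasets]
        w -= lr * sum(grads) / len(grads)
    return w

# Three institutions each hold a private sample of the same
# underlying relationship y = 3x (illustrative data).
random.seed(0)
parties = [[(x, 3 * x) for x in (random.random() for _ in range(50))]
           for _ in range(3)]
w = federated_average(parties)  # converges towards 3.0
```

Even in this toy form, the pattern shows the governance property the text highlights: the coordinating party sees only model updates, never the underlying records.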
Italian authorities could also focus on safe testing capacity and shared learning to explore specific data-sharing practices. Italy’s innovation facilitators can leverage peer lessons to set practical targets. Italian authorities could include specific cohorts in the sandboxes with synthetic data benchmarks for portfolio analytics, market research and post‑trade, and Milano Hub could convene discussion groups with banks, asset managers and FMIs to foster mutual understanding. Authorities could encourage synthetic data baselines for key market functions so firms compare models without sharing real data, and host federated learning pilots for market analysis and post‑trade analytics that build common evaluation methods. Playbooks could document integration patterns for financial workflows, linking them to audit and reporting expectations so firms learn and scale within existing frameworks while raising trust and performance (OECD, 2024[26]). Supervisors can anchor these patterns to proportional principles used in prudential practice, ensuring that the adoption of such practices complements existing model governance and validation expectations.
The promotion of international alignment matters because many Italian financial firms operate across borders. Established data-sharing practices may reduce frictions when data cannot move freely and support stronger competition, faster product cycles and reduced model risk in market‑facing activities (OECD, 2024[26]). As firms adopt privacy‑preserving workflows, operational risks decline and governance improves; customers gain stronger protection and faster services, while staying aligned with EU law and global practice, including practices involving synthetic data and federated learning.
3.2.5. Foster and support public-private co‑operation
Increase AI collaboration between the industry and the public sector (e.g. multi-stakeholder forums, thematic working groups, joint testing frameworks)
Ongoing public-private co‑operation in financial innovation in Italy
Italy’s financial authorities have encouraged dialogue with industry on AI adoption, but structured public-private co‑operation remains limited. Engagements have mostly taken the form of consultations and roundtables rather than permanent frameworks, such as the Roundtable on AI in Finance held at BdI in June 2025. There is potential to expand the range of formal public-private collaborative structures, building upon the success of CERTFin (see below), for example by establishing initiatives specifically for AI deployment. Firms consulted during the project expressed strong interest in frameworks that enable joint exploration of AI use in market activities, emphasising trust-building and governance alignment rather than operational testing mechanisms. Such initiatives could take several forms, including multi-stakeholder forums, thematic working groups or testing grounds for digital innovation architecture. Innovation facilitators (e.g. Milano Hub and the FinTech Channel) also play an important role in fostering public-private co‑operation by providing a common space for dialogue between public and private stakeholders, as discussed further in section 3.2.6. Ongoing forums on selected topics could be established to facilitate structured exchanges between market operators and authorities, allowing for the discussion of practical experiences, emerging challenges and regulatory perspectives. The insights emerging from these discussions could contribute to the development of shared standards and best practices across the industry.
A notable example of co‑operation already in place is the Italian CERTFin. This initiative brings together financial institutions and public authorities to strengthen cyber resilience through information sharing and co‑ordinated responses to threats (CERTFin, 2025[81]). CERTFin demonstrates how structured collaboration can deliver tangible benefits: improved preparedness, faster incident handling and a common language for risk management. Italy’s national AI strategies also emphasise collaboration between public administrations and private entities, with objectives such as promoting research partnerships and creating favourable conditions for AI value generation (AGID, 2022[82]; AGID, 2024[83]). While these strategies are economy-wide rather than finance‑specific, they underline the policy commitment to public-private co‑operation, which could inspire sectoral initiatives tailored to financial markets.
Public-private co‑operation initiatives in other jurisdictions
International experience shows that jurisdictions investing in collaborative frameworks for AI reap significant benefits. Multi-stakeholder forums and thematic working groups could align expectations and foster trust. These platforms could help clarify supervisory priorities, reduce uncertainty and accelerate safe innovation.
In the United Kingdom, sustained engagement between regulators and industry has supported the development of principles-based guidance, enabling firms to scale AI responsibly while maintaining market integrity (UK Government, 2023[84]; 2025[85]). The AI Lab of the UK FCA serves as a dedicated hub for exploring artificial intelligence applications in financial services, providing technical guidance, ethical frameworks and collaborative research opportunities (FCA, 2024[86]). Additionally, the FCA has introduced AI Live Testing, a controlled environment that enables firms to validate AI-driven models under regulatory oversight (FCA, 2025[87]).
Similarly, the Monetary Authority of Singapore (MAS) has initiatives that aim to foster digital innovation through public-private collaboration. The API Exchange (APIX) initiative offers an open architecture that connects financial institutions, fintech firms and regulators. APIX facilitates interoperability and accelerates innovation by providing access to curated APIs, developer tools and a secure testing ecosystem (APIX, 2025[88]). Notably, MAS established the Global Finance & Technology Network (GFTN), a consortium that promotes cross-jurisdictional dialogue, with the aim of harmonising approaches to emerging technologies, reducing regulatory fragmentation, and strengthening global financial systems through co‑ordinated experimentation and knowledge sharing (GFTN, 2025[89]). GFTN has developed ALFIN, an AI-driven tool for financial firms’ research and business intelligence needs (GFTN, 2025[90]). The UK’s and Singapore’s initiatives provide examples of governance‑oriented co‑operation frameworks, rather than specific sandbox operations, which could be partially or entirely replicated for the benefit of Italy’s financial sector. Relatedly, detailed sandbox design and access policies are treated in section 3.2.6.
The relevance for AI development of increased joint action between private and public actors
In this sense, policies that nurture co‑operation can create an environment where innovation and oversight evolve together, facilitated by knowledge sharing and standards alignment (OECD, 2026[1]). Close and sustained engagement with the industry can also yield significant benefits for supervised entities, improving the authorities’ understanding of any challenges encountered by supervised firms in their compliance efforts (OECD, 2026[1]). This mutual learning reduces fragmentation, builds resilience, and unlocks productivity gains and operational efficiencies across financial markets (OECD, 2021[35]; 2024[26]; BIS, 2025[69]).
Italian authorities could explore more proactive forms of engagement with industry that go beyond routine supervisory practices. Traditional tools such as on‑site inspections, thematic reviews and systematic data collection already support dialogue, but these could be complemented by deeper interaction in order to spur awareness of market operators regarding specific topics related to responsible innovation (e.g. compliance, market integrity) (OECD, 2026[1]).
Innovative approaches could be further strengthened by scaling up existing practices such as joint testing,22 which could create opportunities for constructive exchanges between firms developing or deploying AI systems and supervisory bodies. Shared evaluation environments (e.g. through controlled experimentation mechanisms) allow supervisors to observe model behaviour under controlled conditions, while firms benefit from early feedback on supervisory, compliance and risk expectations. Similarly, dedicated public‑private forums can provide a platform for discussing emerging issues, clarifying standards and promoting accountability. Over time, these initiatives can underpin proportional oversight and contribute to a more resilient and transparent AI governance framework across the financial sector, accelerating safe adoption and reinforcing market confidence (OECD, 2026[1]). Such enhanced co‑operation should not be understood as binding on NCAs and MSAs in their assessments, nor as replacing or altering the independent exercise of their respective supervisory mandates. Each authority would retain full autonomy in the performance of its statutory tasks, while benefitting from the advantages of information sharing and resource pooling.
Broad policies that encourage structured dialogue, joint experimentation and voluntary codes can help Italy bridge knowledge gaps and strengthen trust without constraining innovation. Over time, these efforts could lead to clearer expectations, more robust governance and a competitive edge for Italian markets, complementing rather than substituting operational instruments like sandboxes.
3.2.6. Highlight and enhance the role of innovation facilitators
Promoting the existing national-level innovation facilitator ecosystem
Innovation facilitators can play an important role in supporting a responsible and safe integration of AI innovations in financial markets, consistent with the OECD AI Principles (OECD, 2024[91]). They foster closer collaboration between market participants and authorities, helping to address regulatory barriers, while sending a positive signal about the commitment to responsible innovation. While section 3.2.5 focusses on strategic co‑operation frameworks, this section addresses arrangements in the form of innovation facilitators.
Italy benefits from a well-developed ecosystem of innovation facilitators spanning all major segments of financial activity, enabling safe testing of AI applications and fostering constructive engagement with the industry. A financial regulatory sandbox (“sandbox”) has been active since 2021, allowing supervised entities and FinTech operators to test innovative products and services for a limited period. The sandbox is operated by BdI, CONSOB and IVASS, under the co‑ordination of the Fintech Committee, set up at the Ministry of Economy and Finance. Participants must demonstrate that a sandbox project: is innovative; requires an exemption from an existing rule, or joint testing or examination in a controlled environment; adds value for end users or enhances existing processes; is at a sufficient level of maturity; and is economically viable.
BdI also established in 2021 the Milano Hub, which offers consulting services, mentorship, and educational components to financial intermediaries, startups, and research centres, to accelerate the development of projects and promote the quality and safety of specific innovations. Milano Hub works via “Calls for Proposals” focussed on specific themes. The selected projects receive developmental support through technical expertise in specific areas, as well as involvement in events with representatives from the projects, institutions and the academic world. Milano Hub has already supported projects on AI in banking, financial and payment services, as well as AI-driven projects related to digital payments.
The FinTech Channel is a contact point for market participants to engage with BdI and propose or present innovative solutions. Entities may also seek informal advice and guidance (e.g. on regulatory or licensing matters) and learn about other relevant support initiatives. BdI does not provide formal recommendations or legal advice via this Channel, but rather it serves as a mechanism to simplify engagement with industry, especially smaller firms and new entrants. The FinTech Channel has also actively supported AI innovation, with 48% of the interactions in 2024 relating to projects that feature AI solutions.
The effectiveness of the facilitators is being reinforced through amendments streamlining the Sandbox: the new version of the ministerial decree under which the Sandbox operates is expected to be published shortly in the Italian Official Journal.
Enhancing the innovation facilitator ecosystem, including by encouraging smaller and non-supervised firms (e.g. FinTech start-ups) to participate
Italy could consider promoting access to high-performance computing resources for participants in innovation facilitators, an approach that has proven useful in other jurisdictions. For example, in 2024 the Hong Kong Monetary Authority (HKMA) and Hong Kong Cyberport Management Company Limited (Cyberport) jointly introduced a GenAI Sandbox, providing a risk-controlled environment to develop, test and pilot innovative AI-based solutions in real-world banking scenarios (HKMA, 2024[92]). The second cohort of Sandbox participants will benefit from access to computing power facilitated by the Cyberport’s AI Supercomputing Centre (HKMA, 2025[93]). Promoting access to such resources for Italian financial sector participants could for example be carried out as part of the European High Performance Computing Joint Undertaking (EuroHPC JU), via an access call or under the AI Factories Industrial Innovation track (EuroHPC JU, 2025[94]), and leveraging the computing power of the Italian AI Factory IT4LIA (IT4LIA, 2025[95]).
Italy could improve data accessibility through the possible sharing of datasets to support safe model testing by financial firms. Participants in the OECD project survey identified challenges related to the availability of quality, AI-compatible data, and the skills and talent needed to develop AI tools as key non-regulatory constraints to AI deployment. Solutions aiming to respond to such challenges have been tested in other OECD jurisdictions, such as Korea. Since 2017, Korea’s AI Hub has provided publicly available, AI-compatible datasets to support AI model development, and to manage risks related to data quality and personal privacy (AI Hub Korea, 2024[96]). Both synthetic and real datasets are provided, in a range of formats and covering various topics. In 2024, the Korean AI Hub released a synthetic financial dataset assembled in partnership with several local data companies and Dong-eui University. Italy could consider taking a similar approach, with financial authorities, academia and the private sector working jointly to identify the types of datasets that would be valuable for testing AI innovations in the financial sector, and develop a framework to collect and provide access to such datasets, for example via the ABI’s AI Hub.
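As a toy illustration of the synthetic-data idea, the sketch below fits a simple lognormal distribution to a handful of invented transaction amounts and samples fresh values that mimic their broad statistical shape without reusing any real record. Real synthetic-data pipelines (such as those referenced for Korea's AI Hub) are far more sophisticated, modelling correlations across many fields and testing for re-identification risk.

```python
import math
import random
import statistics

def synthetic_amounts(real, n, seed=0):
    # Fit a lognormal to the (illustrative) real amounts and sample
    # fresh values: similar statistical shape, no record reused.
    logs = [math.log(a) for a in real]
    mu, sigma = statistics.fmean(logs), statistics.stdev(logs)
    rng = random.Random(seed)
    return [math.exp(rng.gauss(mu, sigma)) for _ in range(n)]

# Invented transaction amounts standing in for a sensitive dataset.
real = [25.0, 40.0, 60.0, 90.0, 150.0, 220.0]
synth = synthetic_amounts(real, 1000)
```

A downstream team could develop and benchmark models against `synth` while the real dataset never leaves its controlled environment, which is precisely the testing value such hubs aim to provide.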
Italy could also expand the provision of technical expertise, training and upskilling in domains related to AI development for financial applications. Milano Hub could strengthen its role by organising more workshops, seminars and masterclasses for the innovation facilitators community on specific relevant themes, in order to foster interactions and debate at the national level. One way to achieve this could be partnering with the private sector and academia, to draw on skillsets not available in the public sector. One example is Malaysia’s AI Sandbox Pilot Programme (“Pilot Programme”), administered by the Malaysian Research Accelerator for Technology and Innovation (MRANTI) in collaboration with the private sector, which provides training, capacity building and technical support for innovators and entrepreneurs (MRANTI Malaysia, 2024[97]). The topic of skills development is discussed further in section 3.2.7 below.
Italy could consider encouraging smaller firms to participate in the innovation facilitators. During the project bilateral meetings, financial firms noted that regulatory sandboxes can play a valuable role in alleviating some of the regulatory burdens for the testing of AI models. To that end, the Milano Hub’s latest Call for Proposal includes an allocation for SMEs. A similar approach could be taken for the Sandbox. Additionally, the Italian authorities could encourage participation of smaller firms through enhancing awareness-raising of the innovation facilitators and their respective benefits and continuing to provide assistance to prospective applicants on meeting eligibility criteria for participation. Smaller firms may also benefit from networking sessions, where established firms may provide informal advice to new market entrants related to points of interest raised by the firms. Italian authorities could also consider establishing a dialogue with AI research centres (e.g. AI factories) to address AI capabilities gaps for SMEs.
Better integration of the EU and national level innovation facilitator frameworks
Italian authorities could also consider the implementation of the EU AI Act as a strategic opportunity to improve the integration between the national-level innovation facilitators and EU-level initiatives. More specifically, Article 57 of the AI Act requires national competent authorities to establish an AI regulatory sandbox either at the national level or jointly with other member states (European Union, 2024[55]). The Italian authorities are currently evaluating whether the existing Sandbox meets the requirements of the AI Act. Increasing co‑ordination between the Italian and EU levels can enhance the effectiveness of innovation facilitators, encourage more market participation and promote competitiveness across Europe.
The EC has published a draft implementing act for the establishment, development, implementation, operation and supervision of AI regulatory sandboxes, consistent with Article 57 of the AI Act (European Commission, 2025[98]). The draft implementing act features detailed common rules on participation in AI regulatory sandboxes, which is to be free of charge and to prioritise SMEs. It also promotes the set-up of joint AI regulatory sandboxes through appropriate framework agreements, for example a memorandum of understanding.
Italy could consider establishing a centralised, finance‑sector-specific AI sandbox, building upon the existing successful arrangements. The draft implementing act on AI regulatory sandboxes encourages the establishment of sector-specific sandboxes at different levels, particularly for areas of strategic importance and in cases of notable regulatory implementation challenges (European Commission, 2025[98]). Italian authorities could evaluate whether the finance sector would warrant a sector-specific AI sandbox.
Italy could consider formalising its engagement with financial authorities in other EU member states for information exchange on innovation facilitators. Informal exchanges are already in place and may gradually evolve into a more formalised structure. The Italian authorities may consider participating in EU-level AI innovation facilitator initiatives that will be promoted within the EU framework. Italy could continue to enhance co‑operation agreements with partner institutions, in the form of information sharing agreements, MoUs and reciprocal arrangements. The Milano Hub has already signed an MoU with Le Lab – Banque de France and is strengthening its collaboration with the Central Bank of Ireland, to facilitate activities aimed at supporting market innovation.
At a more ambitious level, Italian authorities could contribute to any EU-wide efforts to support cross-border testing. A cross-border sandbox could allow firms to test AI innovations under the supervision of a national financial authority, while also benefitting from input or review by authorities in other jurisdictions where the firm plans to operate. This could contribute to lowering the perceived barriers to cross-border activity and help to identify any inconsistencies in regulatory or supervisory approaches across different countries. Such cross-border sandboxes have been introduced in other jurisdictions. For example, in 2023, the People’s Bank of China, the Hong Kong Monetary Authority and the Monetary Authority of Macau signed an MoU to integrate their respective regulatory sandboxes, allowing FinTech companies to test innovations that span the borders of the three jurisdictions (HKMA, 2024[99]). Italy could play a leading role in this respect, leveraging the experience of operating its Sandbox across the three national financial authorities.
The implementing act on AI regulatory sandboxes also encourages EU Member States to involve other actors in the process, such as national or European standardisation organisations, research and experimentation labs and relevant stakeholder and civil society organisations, which could be further considered in the Italian context (European Commission, 2025[98]).
3.2.7. Support whole-of-government public sector strategic direction for AI development and use in the finance sector
Fostering stronger collaboration across industry, academia and authorities
There is a strong case for whole‑of-government public sector intervention to encourage greater collaboration between the public sector, the financial industry and academia. Among the non-regulatory constraints highlighted by the industry in the project meetings, concerns related to skills gaps emerge as a significant category. This section highlights areas where collaboration could build on existing initiatives and have a strong impact on enabling AI deployment in Italian financial markets.
Assist in upskilling staff and in practical AI deployment
Supporting dedicated research, reskilling and upskilling initiatives in co‑operation with the financial industry can help to address the reported skills gaps. As mentioned in section 3.2.6, AI development and deployment rely on a broad range of skillsets, at the intersection between computer programming, database management and statistics, as well as ethics and compliance functions (OECD, 2023[100]).
Italian financial firms report that they face skills gaps, both at the managerial level regarding governance approaches to AI (see also section 3.2.3 above) and among staff using and developing AI systems. Firms are taking different approaches to address this, for example through internal sandboxes, AI labs and dedicated AI teams, and through training programmes. However, there are opportunities for synergy, drawing on the different strengths and resources of firms, research institutes and the public sector.
Italy can build on the initiatives proposed in its Italian Strategy for Artificial Intelligence 2024–2026 (“AI Strategy”) (AGID, 2024[83]). The AI Strategy calls for the development of specialised technical courses, such as at the postgraduate level, to train researchers as future promoters of AI adoption. It also prioritises re‑skilling and up-skilling programmes for managers and technicians operating AI. These objectives should be pursued in co‑operation with the financial industry to ensure relevance to sector‑specific applications.
Leveraging existing centres of excellence and AI factories
Fostering collaboration among industry, academia, and supervisory authorities can support upskilling and practical AI deployment, as considered in section 3.2.8. Access to resources for such upskilling may be facilitated by leveraging existing centres of excellence and AI factories, tailored to the specifics of the finance sector.
One model proposed in the AI Strategy is “industry-specific Academies” that bring together training bodies, trade associations, and medium-large companies to deliver reskilling and upskilling courses for workers of participating companies and their suppliers. This approach aims to pool resources to build high-quality training that can benefit a whole industry, and to more effectively attract talent into the industry. Italy’s financial sector is a strong candidate for such an approach, given the available financial resources and active industry bodies.
For example, the ABI Lab, as a research centre bringing together Italian banks, IT providers and digital experts, could serve as a basis for an “AI in Finance Academy”, drawing on its existing resources and networks. The AI4I, founded by the Italian Government to perform transformative application-oriented research in Artificial Intelligence, could also expand the services it provides to include in-depth and systematic training courses, including hands-on experimentation with AI using the on-premise HPC cluster and the Leonardo EuroHPC supercomputer located in the Bologna Technopole (AI4I, 2024[101]).
Another example of a private sector-driven AI centre of excellence is the Agorai Innovation Hub, inaugurated in April 2025 (Generali, 2025[102]). Agorai brings together private and public sector bodies alongside academic institutions, to promote applied research, support the development of startups and provide training for firms to upskill in AI domains.
The availability of such centres of excellence and supercomputer resources may be particularly useful for smaller entities that lack access to the advanced infrastructure necessary for AI model development. Italian authorities may consider ways of involving private entities in the research and experimentation efforts, for example by signing MoUs for this purpose.
Upskilling is also facilitated by academic research and collaborations. In January 2026, BdI launched a collaboration with the Central Bank of Ireland for the Innovation Data Challenge 2026, a joint initiative designed to foster research and innovation in the retail payments sector. This challenge, involving multiple leading universities, will allow students to work with synthetic and real-world financial datasets, thereby promoting innovation while adhering to data protection standards (BdI, 2026[8]). Such initiatives promote responsible innovation, while supporting wider upskilling and helping to identify AI talent. Similar challenges or hackathons could focus, for example, on AI-based SupTech tools.
Support testing and development of compliant open-source models
Official sector support for the development of compliant open-source or open‑weight models23 by academia and private actors could strengthen Italy’s AI ecosystem, especially for firms with smaller budgets that are unable to develop in‑house models. While the authorities would not be required to endorse these specific models, consideration could be given to analysing how the models operate and whether they would provide cost‑effective alternatives for firms unable to build proprietary solutions, reducing barriers to entry and promoting competition in the domestic context.
Drawing on the availability of an innovation-friendly ecosystem, Italian authorities can contribute to the testing and development of open-source AI models that can be used by financial market participants, drawing on examples from other jurisdictions, such as Switzerland, and combining transparency, compliance and technical robustness.24
Italy’s advanced IT infrastructure, including high‑performance computing resources, offers a strong basis for such initiatives. Public sector involvement should focus on enabling collaboration and providing access to infrastructure, rather than leading development directly. Italian‑developed AI models could ensure compliance with domestic and EU laws, while also reflecting social and cultural preferences, which is also described as a priority in the AI Strategy (AGID, 2024[83]).
3.2.8. Strengthen supervisory capacity
Enhance capacity for authorities at the national and EU level
The need to equip financial supervisors with the right tools and skills for effective AI oversight in finance is widely acknowledged (OECD and FSB, 2024[9]). Likewise, increased capacity and upskilling of financial supervisors will be necessary to achieve monitoring and oversight objectives, as well as to enable authorities to develop and deploy AI as part of their own supervisory activities. A proportionate, risk‑based approach should also guide the supervisory authorities themselves when deploying AI‑enabled SupTech tools or other AI systems for supervisory purposes. This would reinforce consistency and credibility in the supervisory approach. Each national competent authority should maintain adequate staffing levels, with personnel trained in the latest AI disciplines, while also allowing participation in international capacity-building activities (OECD, 2026[1]).
All three Italian financial authorities have been designated as MSAs under Italian Law No. 132/2025 implementing the EU AI Act. Under the EU Regulation, MSAs are entrusted with monitoring the compliance of AI systems with the law and are responsible for reporting on their supervisory activities at the EU level. In this context, the ongoing strengthening of supervisory capacity is necessary to ensure adequate oversight of AI use in finance, in line with the EU framework. The designation as MSAs also demands effective cross-border co‑operation at the EU level, as well as effective co‑ordination with non-financial authorities, as discussed in section 3.2.2.
Attracting and retaining staff with AI-related skills is a challenge not only for Italian financial sector firms, as reported in the project survey, but also for Italian financial authorities. Sufficient resources are required to effectively oversee and continuously monitor the evolution of AI deployment in finance. Italian authorities should consider further investment in attracting talent with expertise in AI-related topics, as well as in continuous training and upskilling of existing teams to allow them to combine their domain-specific expertise with a deeper technical understanding of AI systems (OECD, 2026[1]). The achievement of this objective will depend on the availability of adequate resources to upskill the current workforce and to onboard new skilled staff, particularly in AI and data science, as well as to create synergies between innovative knowledge and legacy expertise.
The EBA, ESMA, EIOPA and SSM offer a range of programmes, courses, workshops and other initiatives to promote upskilling for national and EU supervisors on AI in the financial sector. Box 3.2 provides an indicative list of such offers. While the breadth of material and activities on offer is positive, there is potential to extend accessibility, consolidate these programmes and provide coherent training and upskilling pathways.
In general, the most effective programmes tend to focus both on technical skills and knowledge of AI technologies, and on the broader skills needed to operate them effectively. The level of technical upskilling provided should be tailored to the different profiles and responsibilities of supervisory staff, while ensuring a minimum baseline understanding of the unique features of AI technologies (OECD, 2026[1]).
Given the fast-evolving nature of AI technology and the rapid cycles of innovation involved, supervisory staff should have access to continuous training and capacity-building activities, rather than ad hoc or one‑off initiatives (OECD, 2026[1]). A sustained approach to upskilling will help supervisors face the constant challenge of keeping their knowledge, skills and oversight frameworks up to date. The OECD, in co‑operation with the European Commission – SG REFORM and the Bank of Italy, organised a roundtable event at the premises of BdI on 12 and 13 June 2025, which received very positive feedback regarding the role of information-sharing facilitated by the initiative, notably across authorities and jurisdictions.25
The EU Supervisory Digital Finance Academy (EU-SDFA, see Box 3.2) appears to be an appropriate candidate for a consolidated EU-level platform for continuous upskilling in digital financial innovation domains. All four Italian financial authorities are partners of EU-SDFA. European authorities could consider continuing this initiative beyond the end of the Technical Support Instrument cycle under which it is currently funded or linking these initiatives with some form of public-private co‑operation, as discussed in section 3.2.5.
Italian authorities should support continuous upskilling in AI and other digital financial innovation domains by leveraging innovative EU platforms, such as the EU-SDFA. Efforts could be made towards the development of a structured competency framework and training curriculum. The co‑operation model with EU platforms and academia should be defined and mapped to relevant sustainable funding mechanisms. Authorities may also consider establishing measurable indicators of supervisory capability enhancement.
Where relevant, it may also be valuable to involve authorities in other policy domains (OECD and FSB, 2024[9]). For example, the European and national-level competition and DPAs can bring complementary perspectives, to understand the impacts of AI on markets and consumers. EU-wide development and access to SupTech tools, common platforms, and co‑ordinated training at both national and EU levels are encouraged and can be further supported by public-private partnerships (see section 3.2.5.).
Box 3.2. Existing upskilling and capacity-building initiatives for supervisors across Europe
This box provides an indicative list of recent initiatives at the EU and global level to provide training, capacity building and experience‑sharing for European financial supervisors on AI technologies.
EU Supervisory Digital Finance Academy training on AI in finance
The EU-SDFA was established through the Technical Support Instrument (TSI) by the European Commission – SG REFORM, in co‑operation with the EBA, ESMA, EIOPA and the Florence School of Banking and Finance. It provides comprehensive training cycles and workshops to support upskilling, knowledge sharing and peer-to-peer exchanges within the financial supervisory community in Europe. EU-SDFA provides a range of AI-related courses and activities, including:
Introduction to AI – an online course that introduces AI in the financial sector. The course begins with foundational AI concepts and traces the recent evolution into Generative AI. It then examines critical AI risks and explores the fundamentals of the AI Act.
Supervising and Regulating AI in the Financial Sector – a forthcoming (2026) residential course providing a comprehensive foundation of the technical, regulatory and supervisory aspects of AI in the financial sector. The programme will cover the full AI model lifecycle, emerging market applications in banking, insurance and securities, as well as the evolving risk landscape shaped by advanced algorithms and autonomous agents. The course aims to equip supervisors and practitioners with the tools needed to evaluate AI model risks, ensure fairness and transparency, and adapt governance frameworks to a rapidly transforming digital ecosystem.
ECB Supervision Innovators Conference – AI in action: Shaping the future of banking and banking supervision – This conference brought together leading global supervision innovators and banking representatives to foster collaboration and explore the latest trends and developments in artificial intelligence and innovation.
UNESCO project: Supervising AI by Competent Authorities
The project aims to equip European national authorities with tools, knowledge and peer support to supervise AI systems effectively. The project’s capacity-building programme includes national-level training sessions across the EU, with sessions in 2025 delivered in 12 EU member states reaching over 700 civil servants. The project supports the broader implementation of the 2021 UNESCO Recommendation on the Ethics of AI.
EIOPA workshop: Artificial Intelligence Supervision
The aim of the event was to discuss with competent national authorities the supervision of AI and its impact on consumers. The workshop was held in April 2024.
EU Academy course: Introduction to Artificial Intelligence for Public Service Interoperability
This course introduces the fundamentals of AI for interoperable public services, covering the concept of Artificial Intelligence, and its components, the legal and policy contexts, methods to support interoperability through AI, challenges and example applications. The course is provided online.
Note: This box provides an indicative list of initiatives and programmes available to European supervisors. It is not a comprehensive list.
Source: EU Supervisory Digital Finance Academy (2025[103]), Creating a common European culture of digital finance supervision, https://eusdfa.eui.eu/; ECB (2025[104]), Supervision Innovators Conference 2025, https://www.bankingsupervision.europa.eu/press/conferences/html/20250924_Supervision_innovators_conference.en.html; UNESCO (2025[105]), Expanding Capacity Building for Competent Authorities on AI: National Trainings Across the EU, https://www.unesco.org/en/articles/expanding-capacity-building-competent-authorities-ai-national-trainings-across-eu; EIOPA (2024[106]), EIOPA Artificial Intelligence Supervision workshop, https://www.eiopa.europa.eu/media/events/eiopa-artificial-intelligence-supervision-workshop-2024-04-24_en; EU Academy (2025[107]), Introduction to Artificial Intelligence for Public Service Interoperability, https://academy.europa.eu/courses/introduction-to-artificial-intelligence-for-public-service-interoperability.
Enhanced sharing of AI-driven SupTech tools at the EU level
SupTech tools leveraging AI can also play a valuable role in supporting supervisory tasks, bringing benefits such as automation, enhanced analytics and greater responsiveness to emerging risks. Deployment of SupTech tools can also act as a signal that authorities encourage responsible applications of AI that enhance productivity and contribute to market stability.
SupTech tools are already being widely deployed by Italian financial authorities and by other national authorities at the EU level (see section 2.2.2). The Italian authorities are using and experimenting with a range of SupTech tools, as detailed in Table 2.2.
At the EU level, European supervisory authorities should consider strengthening co‑ordinated efforts to enable the strategic pooling of expertise and institutional capacity, particularly when it comes to AI-based SupTech tools. The development or acquisition of SupTech applications involving AI can necessitate significant financial investment, robust technological infrastructure and specialised internal expertise (OECD, 2026[1]).
Joint engagements at the cross-border level for the development and sharing of SupTech solutions could be facilitated through common platforms and co‑ordinated training initiatives. Establishing common curricula can also support the convergence of supervisory practices and approaches across EU member states, which can benefit market participants by increasing certainty and consistency of treatment across jurisdictions. Sharing multi-jurisdictional SupTech tools helps to avoid duplication of effort at the national level, while allowing national authorities to learn lessons and adopt best practices from other jurisdictions.
Sharing AI algorithms and libraries, rather than pre‑trained models or complete applications, can offer a faster route to productive use, particularly for validation purposes. Algorithms can be adapted to national technical, legal and linguistic requirements, avoiding the delays linked to operationalising pre‑trained models. This approach also reflects the need for rapid development cycles, given the extremely fast evolution of AI models and their integration into supervisory systems. An appropriate collaboration model should be identified for facilitating public-private partnerships in this domain, as explored in section 3.2.5.
Co‑ordination and sharing of SupTech tools could be effectively achieved at EU level. Box 3.3 provides examples of existing initiatives led by the ECB and Bank for International Settlements (BIS) Innovation Hub Nordic Centre, deploying AI-based tools to support financial supervisory activities. The development and maintenance of tools like the ESMA Data Platform code repository may also be considered to promote innovation while ensuring any emergent risks remain manageable through appropriate governance and oversight frameworks. Strengthening supervisory capacity may also be supported by leveraging AI for supervisory stress testing and by using AI‑based detection tools to assess the extent of automation involved in producing critical documentation.
Box 3.3. Examples of current SupTech tools and experiments in Europe
ECB SupTech Tools
The ECB is actively integrating digital technologies into its supervisory activities, including AI innovations. In 2020 the ECB established a division dedicated to technology and innovation within banking supervision, and in 2024, it issued guidance to encourage Joint Supervisory Teams to experiment with AI and identify practical tools to support day-to-day supervision. Prominent tools that feature AI, or support deployment of AI tools at the national level, include:
Virtual Lab, a platform for SSM-wide digital collaboration, including code sharing, cloud computing and the development of generative AI capabilities.
Athena, an NLP-based textual analysis platform available to all supervisory areas.
Agora, a centralised data platform for all prudential data, available to all SSM users.
Navi, a graph and network analytics platform with advanced visualisation capabilities.
Heimdall, a machine‑reading tool to support analysis of thousands of fit and proper applications.
Gabi, a specialised model development platform for big data analytics.
Delphi, which supports early detection of emerging risks for SSM banks and for the banking sector overall, by integrating market indicators and social media information into a single web-based dashboard using NLP.
Medusa, providing a one‑stop-shop for inspectors and supervisors to access relevant documents easily, using smart search and reporting functionalities as well as visualisations and statistical analyses.
The ECB is now working towards a “single supervisory cockpit”, providing an integrated view of indicators and unstructured insights – dashboards, documents, AI assistants – with explainable flags and transparent workflows.
Bank for International Settlements (BIS) Innovation Hub Nordic Centre – Project Aurora
In 2023, the BIS Innovation Hub Nordic Centre concluded phase 1 of a proof of concept focussed on combating money laundering through the application of privacy-enhancing technologies, ML and network analysis in collaborative analysis and learning (CAL) approaches (Project Aurora). The project demonstrated that using payments data in combination with privacy-enhancing technologies, ML models and network analysis can help anti-money-laundering authorities improve the detection of complex money laundering schemes. A second phase of Project Aurora will focus on PETs and their potential to support anti-money laundering through more effective and safe information-sharing.
Note: This box provides an indicative list of tools and experiments deployed across European authorities and Europe‑based institutions. It is not a comprehensive list.
Source: Machado (2025[108]), Artificial intelligence and supervision: innovation with caution, https://www.bankingsupervision.europa.eu/press/speeches/date/2025/html/ssm.sp251014~5bc6e60334.en.html; McCaul (2024[109]), SSM digitalisation: from exploration to full-scale adoption, https://www.bankingsupervision.europa.eu/press/speeches/date/2024/html/ssm.sp240612_1~a3ace1ed8e.en.pdf; BIS Innovation Hub Nordic Centre (2025[110]), Project Aurora: the power of data, technology and collaboration to combat money laundering across institutions and borders, https://www.bis.org/about/bisih/topics/fmis/aurora.htm; BIS Innovation Hub Nordic Centre (2025[111]), Project Aurora Phase 2: Open call – case studies of the use of privacy enhancing technology in multi-party collaborative analytics to tackle money laundering, fraud and other financial crime, https://www.bis.org/about/bisih/topics/fmis/aurora/open_call.htm.
References
[83] AGID (2024), Italian Strategy for Artificial Intelligence 2024-2026, https://www.agid.gov.it/sites/agid/files/2024-07/Italian_strategy_for_artificial_intelligence_2024-2026.pdf.
[82] AGID (2022), Strategic Programme on Artificial Intelligence 2022-2024, https://docs.italia.it/italia/mid/programma-strategico-nazionale-per-intelligenza-artificiale-en-docs/en/bozza/index.html (accessed on 8 January 2025).
[96] AI Hub Korea (2024), Financial Synthetic Data, https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=data&dataSetSn=71792 (accessed on 17 April 2025).
[101] AI4I (2024), The Institute, https://ai4i.it/the-institute/ (accessed on 18 December 2025).
[88] APIX (2025), About us, https://apixplatform.com/about-us (accessed on 12 December 2025).
[66] Banco Central do Brasil (2025), Open Finance, https://www.bcb.gov.br/en/financialstability/open_finance (accessed on 26 August 2025).
[52] Bank of England (2025), “SS1/23 – Model risk management principles for banks”, https://www.bankofengland.co.uk/prudential-regulation/publication/2023/may/model-risk-management-principles-for-banks-ss (accessed on 16 December 2025).
[22] Bank of England and FCA (2022), DP5/22 - Artificial Intelligence and Machine Learning, https://www.bankofengland.co.uk/prudential-regulation/publication/2022/october/artificial-intelligence (accessed on 23 April 2024).
[20] Bank of Japan (2025), Financial System Report - Use and Risk Management of Generative AI by Japanese Financial Institutions, https://www.boj.or.jp/en/research/brp/fsr/fsrb250930.htm (accessed on 27 October 2025).
[8] BdI (2026), Banca d’Italia e Central Bank of Ireland avviano la prima Innovation Data Challenge, https://www.bancaditalia.it/media/comunicati/documenti/2026-01/cs-Innovation-Data-Challenge-16012026-ITA.pdf (accessed on 26 January 2026).
[11] BdI (2025), Fintech survey, https://www.bancaditalia.it/pubblicazioni/indagine-fintech/ (accessed on 26 January 2026).
[12] BdI (2022), No. 682 - The digital transformation in the Italian banking sector, https://www.bancaditalia.it/pubblicazioni/qef/2022-0682/index.html?com.dotmarketing.htmlpage.language=1&dotcache=refresh&dotcache=refresh.
[69] BIS (2025), The use of artificial intelligence for policy purposes, https://www.bis.org/publ/othp100.htm (accessed on 9 December 2025).
[111] BIS Innovation Hub Nordic Centre (2025), Project Aurora Phase 2: Open call - case studies of the use of privacy enhancing technology in multi-party collaborative analytics to tackle money laundering, fraud and other financial crime, https://www.bis.org/about/bisih/topics/fmis/aurora/open_call.htm (accessed on 18 December 2025).
[110] BIS Innovation Hub Nordic Centre (2025), Project Aurora: the power of data, technology and collaboration to combat money laundering across institutions and borders, https://www.bis.org/about/bisih/topics/fmis/aurora.htm (accessed on 15 December 2025).
[64] Borges, G. et al. (2024), Implementation and Challenges of Open Finance in Brazil, FGV Law, https://direitorio.fgv.br/sites/default/files/arquivos/direito_rio_livro_neasf_18_eng_ap5.pdf (accessed on 12 December 2025).
[79] Borowicz, M. (2024), “The data quality problem (in the European Financial Data Space)”, International Journal of Law and Information Technology, Vol. 32/1, https://doi.org/10.1093/IJLIT/EAAE015.
[67] CCAF and ADBI (2025), The APAC State of Open Banking and Open Finance, https://www.jbs.cam.ac.uk/faculty-research/centres/alternative-finance/publications/the-apac-state-of-open-banking-and-open-finance/ (accessed on 12 December 2025).
[81] CERTFin (2025), CERTFin about us, https://www.certfin.it/about-us/ (accessed on 10 December 2025).
[14] CIPA (2025), CIPA Rilevazione Economica, https://www.cipa.it/rilevazioni/economiche/index.html (accessed on 11 February 2026).
[13] CIPA (2024), Rilevazione sull’IT nel settore bancario italiano - Profili tecnologici e di sicurezza, anno 2023.
[15] CONSOB (2022), Artificial intelligence in the asset and wealth management, https://www.consob.it/web/consob-and-its-activities/ft9en (accessed on 6 January 2025).
[70] Crisanto, J. et al. (2024), Regulating AI in the financial sector: recent developments and main challenges, BIS, https://www.bis.org/fsi/publ/insights63.htm (accessed on 10 December 2025).
[4] Data Spaces Support Centre (2025), Data Spaces Support Centre, https://dssc.eu/ (accessed on 15 December 2025).
[68] DBS (2020), DBS leverages open banking (SGFinDex) and AI to intensify focus on financial planning inclusion nationwide, https://www.dbs.com/newsroom/DBS_leverages_open_banking_SGFinDex_and_AI_to_intensify_focus_on_financial_planning_inclusion_nationwide (accessed on 12 December 2025).
[33] EBA (2025), “AI Act: implications for the EU banking and payments sector”.
[24] EBA (2021), The EBA publishes follow-up Report on the use of machine learning for internal ratings-based models, https://www.eba.europa.eu/publications-and-media/press-releases/eba-publishes-follow-report-use-machine-learning-internal (accessed on 25 April 2024).
[30] EBA (2020), Guidelines on loan origination and monitoring, https://www.eba.europa.eu/sites/default/files/document_library/Publications/Guidelines/2020/Guidelines%20on%20loan%20origination%20and%20monitoring/884283/EBA%20GL%202020%2006%20Final%20Report%20on%20GL%20on%20loan%20origination%20and%20monitoring.pdf (accessed on 20 May 2025).
[104] ECB (2025), Supervision Innovators Conference 2025, https://www.bankingsupervision.europa.eu/press/conferences/html/20250924_Supervision_innovators_conference.en.html (accessed on 15 December 2025).
[47] ECB (2024), The rise of artificial intelligence: benefits and risks for financial stability, Financial Stability Review, https://www.ecb.europa.eu/press/financial-stability-publications/fsr/special/html/ecb.fsrart202405_02~58c3ce5246.en.html (accessed on 9 January 2025).
[38] EDPB (2024), Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models, https://www.edpb.europa.eu/system/files/2024-12/edpb_opinion_202428_ai-models_en.pdf (accessed on 12 December 2025).
[32] EIOPA (2025), Opinion on Artificial Intelligence governance and risk management, https://www.eiopa.europa.eu/publications/opinion-artificial-intelligence-governance-and-risk-management_en (accessed on 3 October 2025).
[106] EIOPA (2024), EIOPA Artificial Intelligence Supervision workshop, https://www.eiopa.europa.eu/media/events/eiopa-artificial-intelligence-supervision-workshop-2024-04-24_en (accessed on 15 December 2025).
[25] EIOPA (2024), EIOPA’s Report on the digitalisation of the European insurance sector, https://www.eiopa.europa.eu/publications/eiopas-report-digitalisation-european-insurance-sector_en (accessed on 3 October 2025).
[31] EIOPA (2021), “Artificial Intelligence governance principles: Towards ethical and trustworthy Artificial Intelligence in the European insurance sector”, https://www.eiopa.europa.eu/document/download/30f4502b-3fe9-4fad-b2a3-aa66ea41e863_en?filename=Artificial%20intelligence%20governance%20principles.pdf (accessed on 20 May 2025).
[16] ESMA (2026), “AI adoption and trends in securities markets: EU evidence”, https://www.esma.europa.eu/sites/default/files/2026-02/ESMA50-481369926-30599_TRV_Risk_Analysis_AI_adoption_and_trends_in_securities_markets.pdf (accessed on 23 February 2026).
[23] ESMA (2025), Artificial intelligence in EU investment funds: adoption, strategies and portfolio exposures, https://www.esma.europa.eu/sites/default/files/2025-02/ESMA50-43599798-9923_TRV_Article_Artificial_intelligence_in_EU_investment_funds.pdf.
[29] ESMA (2025), Warning on the use of AI for investing, https://www.esma.europa.eu/sites/default/files/2025-03/ESMA_Warning_on_the_use_of_AI_-_EN.pdf (accessed on 12 May 2025).
[28] ESMA (2024), Public Statement on the use of Artificial Intelligence (AI) in the provision of retail investment services, https://www.esma.europa.eu/sites/default/files/2024-05/ESMA35-335435667-5924__Public_Statement_on_AI_and_investment_services.pdf (accessed on 23 January 2025).
[27] ESMA (2018), Guidelines on certain aspects of the MiFID II suitability requirements, https://www.esma.europa.eu/sites/default/files/library/esma35-43-869-_fr_on_guidelines_on_suitability.pdf (accessed on 20 May 2025).
[36] EU (2025), Digital Omnibus Regulation Proposal, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52025PC0837 (accessed on 11 March 2026).
[61] EU (2022), Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector and amending Regulations (EC) No 1060/2009, (EU) No 648/2012, (EU) No 600/2014, (EU) No 909/2014 and (EU) 2016/1011 [DORA Regulation], https://eur-lex.europa.eu/eli/reg/2022/2554/oj (accessed on 20 May 2025).
[107] EU Academy (2025), Introduction to Artificial Intelligence for Public Service Interoperability, https://academy.europa.eu/courses/introduction-to-artificial-intelligence-for-public-service-interoperability (accessed on 15 December 2025).
[94] EuroHPC JU (2025), AI Factories Access Modes - The European High Performance Computing Joint Undertaking (EuroHPC JU), https://www.eurohpc-ju.europa.eu/ai-factories/ai-factories-access-modes_en (accessed on 10 December 2025).
[98] European Commission (2025), Commission seeks feedback on draft implementing act to establish AI regulatory sandboxes under the AI Act, https://digital-strategy.ec.europa.eu/en/consultations/commission-seeks-feedback-draft-implementing-act-establish-ai-regulatory-sandboxes-under-ai-act (accessed on 12 January 2026).
[73] European Commission (2025), Commission unveils Savings and Investments Union strategy to enhance financial opportunities, https://ec.europa.eu/commission/presscorner/detail/en/ip_25_802 (accessed on 9 December 2025).
[5] European Commission (2025), Common European data spaces, https://digital-strategy.ec.europa.eu/en/policies/data-spaces (accessed on 15 December 2025).
[60] European Commission (2025), Communication on a Financial Literacy Strategy for the EU, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52025DC0681 (accessed on 23 March 2026).
[75] European Commission (2025), Data Spaces | Interoperable Europe Portal, https://interoperable-europe.ec.europa.eu/collection/semic-support-centre/data-spaces (accessed on 15 December 2025).
[6] European Commission (2025), European Data Union Strategy, https://digital-strategy.ec.europa.eu/en/policies/data-union (accessed on 15 December 2025).
[74] European Commission (2025), Savings and Investments Union: A Strategy to Foster Citizens’ Wealth and Economic Competitiveness in the EU, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52025DC0124 (accessed on 9 December 2025).
[76] European Commission (2024), Simpl: Cloud-to-edge federations empowering EU data spaces, https://digital-strategy.ec.europa.eu/en/policies/simpl (accessed on 15 December 2025).
[71] European Commission (2023), Framework for Financial Data Access and amending Regulations (EU) No 1093/2010, (EU) No 1094/2010, (EU) No 1095/2010 and (EU) 2022/2554, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52023PC0360 (accessed on 9 December 2025).
[72] European Council (2025), Savings and investments union, https://www.consilium.europa.eu/en/policies/savings-and-investments-union-siu/ (accessed on 9 December 2025).
[55] European Union (2024), Regulation (EU) 2024/1689 (AI Act), https://eur-lex.europa.eu/eli/reg/2024/1689/oj (accessed on 28 November 2024).
[103] EU-SDFA (2025), Creating a common European culture of digital finance supervision, https://eusdfa.eui.eu/ (accessed on 15 December 2025).
[87] FCA (2025), FS25/5 - Summary of Feedback Received on the Engagement Paper proposing AI Live Testing, https://www.fca.org.uk/publication/feedback/fs25-5.pdf (accessed on 12 December 2025).
[86] FCA (2024), AI Lab, https://www.fca.org.uk/firms/innovation/ai-lab (accessed on 12 December 2025).
[53] Federal Reserve (2025), Supervisory Letter SR 11-7 on guidance on Model Risk Management, April 4, 2011, https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm (accessed on 16 December 2025).
[51] Financial Services Agency of Japan (2021), FSA publishes an English translation of Principles for Model Risk Management, https://www.fsa.go.jp/en/news/2021/2021112en.html (accessed on 16 December 2025).
[21] Finansinspektionen (2024), AI in the Swedish financial sector, https://www.fi.se/contentassets/084ebc13d6364a28a87a37c9a557ec9c/report-ai-swedish-financial-sector.pdf (accessed on 15 September 2025).
[19] FIN-FSA (2025), The use of artificial intelligence by financial sector actors, https://www.finanssivalvonta.fi/globalassets/en/publications/supervision-releases/2025/the-use-of-artificial-intelligence-by-financial-sector-actors_en_.pdf (accessed on 15 September 2025).
[18] FINMA (2025), FINMA survey: artificial intelligence gaining traction at Swiss financial institutions, https://www.finma.ch/en/news/2025/04/20250424-mm-umfrage-ki/ (accessed on 9 September 2025).
[63] FSB (2025), FSB finalises the common Format for Incident Reporting Exchange (FIRE), Financial Stability Board, https://www.fsb.org/2025/04/format-for-incident-reporting-exchange-fire-final-report/ (accessed on 27 January 2026).
[10] FSB (2025), “Monitoring Adoption of Artificial Intelligence and Related Vulnerabilities in the Financial Sector”, http://www.fsb.org/emailalert (accessed on 27 October 2025).
[45] FSB (2024), The Financial Stability Implications of Artificial Intelligence, FSB, Washington, DC., https://www.fsb.org/uploads/P14112024.pdf (accessed on 1 December 2024).
[62] G7 (2025), G7 Cyber Expert Group on Artificial Intelligence and Cybersecurity, https://home.treasury.gov/system/files/136/G7-Cyber-Expert-Group-Statement-AI-and-Cybersecurity-2025.pdf (accessed on 3 October 2025).
[77] Gaia-X (2025), Gaia-X Hub Italia, https://www.gaiax-italia.eu/ (accessed on 15 December 2025).
[7] Gaia-X (2023), About - Gaia-X: A Federated Secure Data Infrastructure, https://gaia-x.eu/about/ (accessed on 15 December 2025).
[39] Garante (2024), Press room, Garante Privacy, https://www.garanteprivacy.it/web/garante-privacy-en/press-room (accessed on 8 January 2025).
[44] GARP (2025), Five Pillars of Generative AI Governance in Financial Services, https://www.garp.org/risk-intelligence/culture-governance/five-pillars-generative-ai-250103 (accessed on 11 December 2025).
[102] Generali (2025), Agorai Innovation Hub - Generali Group, https://www.generali.com/agorainnovationhub#partner (accessed on 26 January 2026).
[90] GFTN (2025), Meet ALFIN, https://gftn.co/alfin/ (accessed on 12 December 2025).
[89] GFTN (2025), Who We Are, https://gftn.co/who-we-are/ (accessed on 12 December 2025).
[93] HKMA (2025), HKMA announces second cohort of GenA.I. Sandbox to advance responsible AI innovation, https://www.info.gov.hk/gia/general/202510/15/P2025101500258.htm (accessed on 10 December 2025).
[99] HKMA (2024), “Expansion of Greater Bay Area Fintech Pilot Trial Facility”, https://cdn.amcm.gov.mo/uploads/attachment/2024-02/ch_av_614_mc006.pdf (accessed on 18 April 2025).
[92] HKMA (2024), HKMA and Cyberport Launch GenA.I. Sandbox to Bolster A.I. Adoption in Financial Sector, https://www.hkma.gov.hk/eng/news-and-media/press-releases/2024/08/20240813-6/ (accessed on 11 April 2025).
[95] IT4LIA (2025), About IT4LIA - Italy for Artificial Intelligence, https://it4lia-aifactory.eu/about-it4lia/ (accessed on 10 December 2025).
[17] IVASS (2023), Survey on the use of Machine Learning algorithms by insurance companies in their relations with policyholders, https://www.ivass.it/pubblicazioni-e-statistiche/pubblicazioni/altre-pubblicazioni/2023/indagine-algoritmi/Esiti_indagine_Algogovernance_ENG.pdf?language_id=3 (accessed on 27 January 2025).
[108] Machado, P. (2025), Artificial intelligence and supervision: innovation with caution, https://www.bankingsupervision.europa.eu/press/speeches/date/2025/html/ssm.sp251014~5bc6e60334.en.html (accessed on 18 December 2025).
[109] McCaul, E. (2024), SSM digitalisation – from exploration to full-scale adoption, ECB, https://www.bankingsupervision.europa.eu/press/speeches/date/2024/html/ssm.sp240612_1~a3ace1ed8e.en.pdf (accessed on 18 December 2025).
[97] MRANTI Malaysia (2024), Mosti launches AI Sandbox Pilot Programme, https://mranti.my/happenings/news/mosti-launches-ai-sandbox-pilot-programme (accessed on 17 April 2025).
[43] NIST (2023), “NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)”, https://doi.org/10.6028/NIST.AI.100-1.
[1] OECD (2026), Supervision of Artificial Intelligence in Finance: Challenges, Policies and Practices, https://www.oecd.org/en/publications/supervision-of-artificial-intelligence-in-finance_92743dc1-en.html.
[3] OECD (2026), The interplay between Artificial Intelligence and Open Finance: Synergies, interdependencies and policy implications, OECD Publishing.
[116] OECD (2025), “AI openness: A primer for policymakers”, OECD Artificial Intelligence Papers, https://doi.org/10.1787/02f73362-en.
[41] OECD (2025), Artificial Intelligence in Asia’s financial sector: A review of country policies, https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/12/artificial-intelligence-in-asia-s-financial-sector_b8532d0b/3385bbd8-en.pdf.
[80] OECD (2025), “Sharing trustworthy AI models with privacy-enhancing technologies”, OECD Artificial Intelligence Papers, No. 38, OECD Publishing, Paris, https://doi.org/10.1787/a266160b-en.
[57] OECD (2025), Supporting informed and safe use of digital payments through digital financial literacy, OECD Publishing, Paris, https://doi.org/10.1787/21de47d1-en.
[117] OECD (2025), “Towards a common reporting framework for AI incidents”, OECD Artificial Intelligence Series, https://doi.org/10.1787/f326d4ac-en.
[91] OECD (2024), AI Principles, https://www.oecd.org/en/topics/sub-issues/ai-principles.html (accessed on 29 April 2025).
[34] OECD (2024), “AI, data governance and privacy: Synergies and areas of international co-operation”, OECD Artificial Intelligence Papers, No. 22, OECD Publishing, Paris, https://doi.org/10.1787/2476b1a4-en.
[115] OECD (2024), “Defining AI incidents and related terms”, OECD Artificial Intelligence Papers, No. 16, OECD Publishing, Paris, https://doi.org/10.1787/d1a8d965-en.
[40] OECD (2024), Recommendation of the Council on Artificial Intelligence, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (accessed on 18 September 2025).
[26] OECD (2024), “Regulatory approaches to Artificial Intelligence in finance”, OECD Artificial Intelligence Papers, No. 24, OECD Publishing, Paris, https://doi.org/10.1787/f1498c02-en.
[42] OECD (2023), “Common guideposts to promote interoperability in AI risk management”, OECD Artificial Intelligence Papers, No. 5, OECD Publishing, Paris, https://doi.org/10.1787/ba602d18-en.
[48] OECD (2023), “Generative artificial intelligence in finance”, OECD Artificial Intelligence Papers, No. 9, OECD Publishing, Paris, https://doi.org/10.1787/ac7149cc-en.
[100] OECD (2023), OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market, OECD Publishing, Paris, https://doi.org/10.1787/08785bba-en.
[59] OECD (2023), OECD/INFE 2023 International Survey of Adult Financial Literacy, OECD, https://www.oecd.org/financial/education/international-survey-of-adult-financial-literacy-2023.htm (accessed on 25 January 2024).
[35] OECD (2021), Artificial Intelligence, Machine Learning and Big Data in Finance: Opportunities, Challenges and Implications for Policy Makers, OECD Publishing, Paris, https://doi.org/10.1787/98e761e7-en.
[58] OECD (2020), Recommendation of the Council on Financial Literacy, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0461 (accessed on 20 September 2021).
[2] OECD (2019), Artificial Intelligence in Society, OECD Publishing, Paris, https://doi.org/10.1787/eedfee77-en.
[37] OECD.AI (2026), Catalogue of Tools & Metrics for Trustworthy AI, https://oecd.ai/en/catalogue/faq (accessed on 11 December 2025).
[114] OECD.AI (2026), G7 reporting framework – Hiroshima AI Process (HAIP) international code of conduct for organizations developing advanced AI systems, https://transparency.oecd.ai/ (accessed on 3 April 2026).
[9] OECD and FSB (2024), OECD – FSB Roundtable on Artificial Intelligence (AI) in Finance: Summary of key findings, OECD and FSB, Paris, https://www.fsb.org/2024/09/oecd-fsb-roundtable-on-artificial-intelligence-ai-in-finance-summary-of-key-findings/ (accessed on 9 December 2025).
[54] Office of the Comptroller of the Currency (2021), Comptroller’s Handbook: Model Risk Management, Office of the Comptroller of the Currency, Washington, D.C., https://www.occ.gov/publications-and-resources/publications/comptrollers-handbook/files/model-risk-management/pub-ch-model-risk.pdf (accessed on 16 December 2025).
[50] Office of the Superintendent of Financial Institutions (2025), Guideline E-23 – Model Risk Management (2027), undergoing publication process, https://www.osfi-bsif.gc.ca/en/guidance/guidance-library/guideline-e-23-model-risk-management-2027 (accessed on 16 December 2025).
[65] Open Finance Brasil (2025), Dashboard do Cidadão, https://openfinancebrasil.org.br/dashboard-do-cidadao/ (accessed on 12 December 2025).
[78] Penedo, A. and P. Kramcsák (2023), “Can the European Financial Data Space remove bias in financial AI development? Opportunities and regulatory challenges”, International Journal of Law and Information Technology, Vol. 31/3, pp. 253-275, https://doi.org/10.1093/IJLIT/EAAD020.
[49] Perez-Cruz, F. et al. (2025), “Managing explanations: how regulators can address AI explainability”, Bank for International Settlements - FSI Occasional Papers 24, https://www.bis.org/fsi/fsipapers24.htm (accessed on 15 December 2025).
[112] Project Apertus (2025), Democratizing Open and Compliant LLMs for Global Language Environments: Apertus v1 Technical Report.
[113] Swiss-AI (2025), About Apertus, https://www.swiss-ai.org/apertus (accessed on 11 December 2025).
[85] UK Government (2025), AI Opportunities Action Plan, https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan (accessed on 10 December 2025).
[84] UK Government (2023), A pro-innovation approach to AI regulation, Department for Science, Innovation and Technology (DSIT), https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach (accessed on 10 December 2025).
[105] UNESCO (2025), Expanding Capacity Building for Competent Authorities on AI: National, https://www.unesco.org/en/articles/expanding-capacity-building-competent-authorities-ai-national-trainings-across-eu (accessed on 15 December 2025).
[46] US Department of the Treasury (2024), Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.
[56] World Bank (2025), Digital Progress and Trends Report 2025: Strengthening AI Foundations, https://doi.org/10.1596/978-1-4648-2264-3.
Notes
1. The OECD Catalogue of Tools & Metrics for Trustworthy AI offers a repository of tools for explainability and transparency. These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe (OECD.AI, 2026[37]).
2. The OECD AI Principles recommend, inter alia, shaping an enabling, interoperable governance and policy environment for AI, encompassing the use of experimentation to provide a controlled environment in which AI systems can be tested and scaled up (OECD, 2024[40]).
3. For example, Swiss Apertus is a fully open-source suite of LLMs, pretrained exclusively on openly available data and drawing on content in over 1 800 languages (Project Apertus, 2025[112]).
4. Input by BdI to the Project Supervisory Questionnaire, as of 30 April 2025.
5. Input by BdI to the project workshop held on 6 May 2025.
6. Input by IVASS to the Project Supervisory Questionnaire, as of 30 April 2025.
7. Input by BdI to the Project Supervisory Questionnaire, as of 30 April 2025.
8. Input by IVASS to the Project Supervisory Questionnaire, as of 30 April 2025.
9. Input by the Italian authorities to the project workshop held on 6 May 2025.
10. Input by IVASS to the Project Supervisory Questionnaire, as of 30 April 2025.
11. Input by CONSOB to the Project Supervisory Questionnaire, as of 30 April 2025.
12. Input by IVASS to the Project Supervisory Questionnaire, as of 30 April 2025.
13. Input by CONSOB to the Project Supervisory Questionnaire, as of 30 April 2025.
14. Input by the Italian authorities to the Project Supervisory Questionnaire, as of 30 April 2025.
15. Input by IVASS to the Project Supervisory Questionnaire, as of 30 April 2025.
16. The Hiroshima Code of Conduct and Reporting Framework is targeted at AI developers and deployers and could provide useful guidance on which to build regarding transparency disclosures (OECD.AI, 2026[114]).
17. I.e. those not classified as critical under the EU classification and regulatory framework.
18. In several recent cases, AI-generated deepfakes, namely manipulated video and audio content featuring prominent Italian politicians, were used to promote unauthorised crypto platforms and fictitious investment opportunities. Italian financial authorities (i.e. CONSOB) have significantly intensified efforts to address the use of AI-generated deepfakes in fraudulent investment schemes.
19. Including the G7 Cyber Expert Group’s 2025 statement on AI and cybersecurity.
20. The OECD has proposed a common framework for reporting AI incidents with 29 criteria across eight dimensions (OECD, 2025[117]), as well as definitions for AI incidents and related terms (OECD, 2024[115]).
21. The Digital Omnibus proposal aims to consolidate and streamline the rules of the Data Governance Act in order to enhance the attractiveness of certain data sharing mechanisms (EU, 2025[36]).
22. This section addresses the strategic rationale for joint evaluation; concrete sandbox eligibility and process changes are discussed in Section 3.2.6.
23. AI openness exists on a spectrum ranging from fully closed systems with restricted access to fully open models that permit unrestricted access, modification, and use (OECD, 2025[116]). This spectrum encompasses various system components, including data, code, and documentation. Recognising this range is essential for understanding the policy implications of different levels of openness across these components. This report uses the term open-weight models to refer to foundation models with publicly available trained weights. These models can generate content and perform a variety of tasks across different applications. While licensing is an important aspect of the discussion surrounding the availability of AI models, this report focuses on open weights due to their growing relevance in policy discussions about the benefits and risks associated with these models.
24. Apertus is a fully open-source suite of LLMs, pretrained exclusively on openly available data and drawing on content in over 1 800 languages (Project Apertus, 2025[112]). The Apertus LLMs were developed as part of the Swiss AI Initiative, led by the École Polytechnique Fédérale de Lausanne and ETH Zurich (Swiss-AI, 2025[113]). They were developed by a cross-disciplinary team of Swiss researchers, engineers and students, drawing also on the infrastructure expertise of engineers at the Swiss National Supercomputing Centre (CSCS). The Apertus models reflect Swiss data protection laws, copyright laws and the transparency obligations under the EU AI Act.
25. The OECD, in co‑operation with the European Commission – SG REFORM and the Bank of Italy, organised a roundtable event at the premises of BdI on 12 and 13 June 2025, constituting one of the project outputs. It convened experts from the Italian financial private sector, as well as market players with global reach. It also featured experts from national and regional regulatory and supervisory authorities, who shared their best policies and practices in creating the conditions necessary for the broader safe deployment of AI in finance. The Roundtable was attended by over 50 participants, including 21 speakers from 16 different authorities in EU and non-EU OECD Member countries, 17 finance industry representatives from Italy and other OECD Member countries, 2 academics and 2 global technology company representatives.