Rapid technological advancements, growing global uncertainties, increasing competition, and the need to address global and societal challenges are increasing demands on science, technology and innovation policy. To respond to these challenges, policy needs to be agile: proactive, timely and responsive. Strategic intelligence and policy experimentation enable policymakers to “tool up” for agility. Strategic intelligence can provide timely insights through anticipatory and real-time evidence production, while policy experimentation enables testing new ideas and critically evaluating policy impacts. Together these approaches support evidence-based policymaking.
OECD Science, Technology and Innovation Outlook 2025
7. Tools for agility: Actionable strategic intelligence and policy experimentation
Key messages
In times of turbulence, or in situations of great strategic need, science, technology and innovation (STI) policymaking must move rapidly under high uncertainty. The COVID-19 pandemic revealed challenges in mounting a rapid response to create STI solutions. STI solutions may also be entangled in regulatory reconfigurations, particularly for new and emerging technologies, where the pace of regulatory change may slow down the emergence, development and deployment of novel technology solutions. In such situations, agile policies are needed that are informed (by appropriate strategic intelligence), adaptive (through integrating learning by doing) and innovative (through experimentation with new policy approaches).
Agility in STI policymaking means anticipating and quickly adapting to new trends, challenges and circumstances by focusing efforts where they are needed the most. Agile policies are proactive, timely and responsive, allowing decision-making bodies to swiftly implement policies, adjust to unexpected situations, halt ineffective ones and redefine strategies as necessary.
Strategic intelligence and policy experimentation constitute core reinforcing priorities to boost policy agility. Despite their recognised potential, there are several barriers to implementing agile, adaptive and responsive STI policies, including institutional rigidity and risk aversion within public administrations, insufficient capacities and skills for their implementation, and challenges in scaling successful initiatives.
Fostering the use of strategic intelligence and policy experimentation among policymakers requires institutionalising experimentation by embedding it into national programmes and frameworks; increasing flexibility and adaptability within bureaucratic structures (through better co-ordination mechanisms and simplified processes); and investing in training programmes for public sector officials for them to embrace experimental approaches.
What happens after experimentation ultimately determines its policy impact. Ensuring there is a clear pathway for scaling up interventions that prove successful or phasing down those that fail is key. Those decisions, in turn, rely on rigorous evaluation processes as the basis on which to learn about the performance of a policy initiative.
Ensuring that structures put in place for policy experimentation remain reversible and adaptable is key to facilitating scaling up or discontinuing initiatives without major disruption. Another key element is that responses to the outcomes of evaluations are not held back by vested interests.
Introduction
Growing global uncertainties, increasing competition and the urgent need to address societal challenges are increasing demands on STI policies. Traditional policy approaches have proven inadequate for dealing with today's much larger transformation challenges. To remain effective, policies must be agile: adaptive, forward-looking, and capable of responding to complex and evolving challenges.
Strategic intelligence, in the public and private sectors, and policy experimentation constitute core reinforcing priorities to boost agility. Strategic intelligence – usable knowledge that helps policymakers understand the impacts of STI and anticipate future developments (Robinson et al., 2021[1]) – enables governments to harness the benefits of emerging technologies while mitigating risks. It also serves as a foresight tool, helping policymakers prepare for and respond to new challenges (Robinson, Winickoff and Kreiling, 2023[2]).
Policy experimentation in its various forms is another component of agility for STI policy and refers to the deliberate implementation of small-scale and/or temporary policy interventions designed to test the outcomes of new approaches. This chapter focuses on building small-scale environments for policy experimentation in the form of innovation policy labs and regulatory sandboxes; and using assessment methods for policy experimentation, as exemplified by randomised control trials (RCTs). Other dimensions of policy experimentation, notably in relation to stakeholder engagement, are left out in view of the space constraints and complexity of also covering those topics as part of the analysis.
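To make concrete the kind of assessment an RCT enables, the sketch below (using hypothetical outcome data; the function and variable names are illustrative, not drawn from this chapter) estimates a treatment effect as the difference in mean outcomes between a randomly assigned treatment group and a control group, and gauges its significance with a simple permutation test:

```python
import random
import statistics

def rct_effect(outcomes_treated, outcomes_control, n_perm=5000, seed=0):
    """Minimal sketch of an RCT analysis (hypothetical data assumed).

    Returns the estimated treatment effect (difference in means) and a
    permutation-test p-value: how often a random re-labelling of units
    produces an effect at least as large as the observed one.
    """
    rng = random.Random(seed)
    effect = statistics.mean(outcomes_treated) - statistics.mean(outcomes_control)
    pooled = outcomes_treated + outcomes_control  # new list; originals untouched
    n_t = len(outcomes_treated)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random re-assignment to "treated"/"control"
        perm = statistics.mean(pooled[:n_t]) - statistics.mean(pooled[n_t:])
        if abs(perm) >= abs(effect):
            extreme += 1
    return effect, extreme / n_perm
```

A small p-value suggests the measured effect is unlikely to arise from chance assignment alone, which is the core logic that makes RCTs attractive for testing policy interventions before scaling them.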
This chapter examines how strategic intelligence and policy experimentation can enhance agile and informed decision making in a rapidly changing world. It includes country examples and highlights key challenges and policy lessons learnt regarding how to overcome those challenges and introduce agility.
This chapter is structured as follows. It begins with explaining why there is a current need for agility and introduces the concepts of strategic intelligence and policy experimentation. It then outlines key requirements for their implementation. The chapter then moves to a discussion of strategic intelligence and policy experimentation in practice, providing specific country examples. The chapter then examines key challenges limiting broader adoption of agility and policy responses. The final section concludes and sets the agenda ahead.
Why agile policymaking is needed
What is policy agility?
Agility entails proactive, timely and responsive policymaking, being able to anticipate and adapt fast to new circumstances, trends and challenges by focusing efforts where they are needed the most (OECD, 2024[3]; Arnold et al., 2023[4]). It ensures decision-making bodies remain flexible, able to adjust to unexpected situations, halt ineffective policies and redefine strategies as needed. Clear responsibilities and feedback mechanisms are essential to support this process (Weber et al., 2021[5]).
Agility marks the change from policymaking based on traditional “tried and tested” routines along the policy cycle to more adaptive cycles (Cairney, 2012[6]; Haddad et al., 2022[7]). Unlike traditional policy cycles, which emphasise incremental policy learning over long time frames in relatively static (slowly evolving) institutions, agile processes prioritise real-time policy learning, allowing for faster, more responsive decision making (Figure 7.1). Agile policies work best when the policymaking institutions themselves can learn from policy experimentation, adapt and become agile institutions (as opposed to the rather static institutions prevalent in traditional policymaking). Each approach has its strengths. Traditional policy cycles follow tried and tested routines, are relatively predictable, and can draw on cumulative learning, which is particularly useful in highly predictable situations; they also allow other stakeholders to adapt their plans, which is particularly useful for long-term strategy making. Agile policy cycles suit times of turbulence and low predictability, allowing for learning while doing, and are particularly useful in crises or in situations of new opportunity but high uncertainty.
Figure 7.1. Incremental and dynamic policy cycles for stable and agile institutions
In agile policy processes, the policy cycle is highly iterative due to factors like uncertainty and evolving technologies. The initial phase recognises the challenge of defining policy goals in this dynamic environment. Next, policy approaches co-evolve with broad objectives through iterative design, leading to potential strategies for testing. This phase also considers alternative policy pathways for future use while building both internal and external legitimacy through stakeholder engagement. Implementation involves tailoring and testing policies via tools like regulatory sandboxes, with continuous monitoring that integrates “learning-by-doing” for real-time adaptation.
What is driving demand for STI policy agility?
OECD countries widely recognise the need for agile STI policies (OECD, 2024[8]) due to:
The pace of transformative technological change: Many technologies are advancing at an unprecedented pace. Artificial intelligence (AI) tools like ChatGPT, for example, gained millions of users within weeks after its launch, demonstrating how quickly innovations can change the way people work, communicate and access information. Such technologies are transforming industries by automating tasks, enhancing customer service and optimising decision-making processes. The convergence of synthetic biology with AI and automation is likely to accelerate transformative innovations in a range of sectors (see Chapter 5). These technologies require policies that can keep pace with the changes to support further business innovations and ensure that adaptive regulations protect consumers from risks such as misinformation, privacy breaches and unfair market practices.
The need for preparedness to operate in a context of uncertainty: The future is marked by uncertainty and vulnerabilities, but also emerging opportunities, requiring policies that can adapt to rapidly changing circumstances. Global crises, like the COVID-19 pandemic, have revealed gaps in resilience, showing how existing systems can be overwhelmed by unexpected shocks. These events highlight the importance of policies that can respond quickly to safeguard public health and ensure economic stability while managing other crises. Similarly, geopolitical tensions underscore the need for policies that can address the broader consequences of conflicts, like disrupted supply chains and migration flows. Without agile policy responses, societies risk greater instability, economic disruption and long-term social challenges. Policy agility is also essential to seize new opportunities for more sustainable and inclusive development.
The need to strengthen national competitiveness in key strategic areas: In an era of intensified technology-based international competition, the concepts of “technology sovereignty” and “strategic autonomy” – which refer to a polity’s capacity to act strategically and autonomously in an era of intensifying global technology-based competition – have gained ground in national policymaking (see Chapter 2). Traditional policy approaches may struggle to keep pace with technological change and the opportunities and challenges it brings for markets and society. Agile policymaking enables governments to be more responsive and to target support where it is needed the most.
The need to address global and societal challenges: Addressing global challenges such as those related to food security, extreme weather events, and poverty and resource depletion requires significant innovation. The International Energy Agency states that most CO2 emissions reductions through 2030 will come from technologies already available. However, by 2050, 35% of these reductions require technologies that have not yet been developed. To meet these goals, major innovation efforts are needed this decade to bring these technologies to market (IEA, 2023[9]). To drive this change, policies must be agile and experimental, allowing public resources to be used more effectively in supporting green technologies, such as renewable energy, carbon capture and electric vehicle infrastructure. Agile policies can break down barriers to scaling these technologies, helping them reach broader adoption more quickly.
Agile policy requirements: Six support actions
Agile policymaking calls for support actions that can be served by tools and approaches that catalyse and inform agile policy processes. Agility can be promoted in different ways. Figure 7.2 presents the agile policy cycle encircled by six “support actions” that are particularly helpful for catalysing and informing agile policies.
Figure 7.2. Six support actions to catalyse and inform agile policymaking
The six support actions for agile policy are:
1. Undertake a preliminary assessment: For situations of high complexity, uncertainty or urgency, it is not always evident whether policy action is necessary or how it should be targeted. Thus, a preliminary diagnosis of the situation is required. For example, the OECD Framework for Anticipatory Governance of Emerging Technologies (OECD, 2024[10]) argues that an appropriate preliminary diagnosis process can, with limited resources, help scope the potential policy issue and focus further intervention. This provides an opportunity for sense-making, learning and targeted action (including the decision not to take any policy action).
2. Anticipate multiple options: Rapid and agile policies can benefit from identifying multiple policy pathways that can potentially be followed. Tools like adaptive foresight can help identify multiple potential modes of action while also providing some insights to their potential impact. While an eventual choice will be made to pursue a particular policy pathway, the alternative potential options can be used as a benchmark for comparison during the agile policy cycle.
3. Draw on collective intelligence: To support a robust design of policies and facilitate legitimacy of the novel policy measure with those that the policy measure will impact, it is advantageous to engage with, and harvest intelligence from, a wide range of stakeholders. Incorporating collective intelligence into research and innovation policymaking ensures that policies are robust and aligned with societal and environmental needs. Engaging a wide array of stakeholders, including researchers, industry leaders and the public, provides diverse perspectives and enhances the design and legitimacy of a new policy measure. Tools that can assist in this support action include, but are not exclusive to, participatory technology assessment and multistakeholder dialogues.
4. Trial policy innovations: Agile policy is characterised by the need to develop and apply policies to rapidly changing contexts, often requiring new policy approaches. This can be catalysed by supporting the testing and assessment of new approaches during the policy cycle. Policy experimentation in its many varieties can provide support for this (see Figure 7.2).
5. Evaluate for real-time steering: While ex post evaluation is useful for stable and predictable contexts, where policy learning can take time, when circumstances are changing, such learnings may come too late. There is thus a need to shift from evaluation for ex post assessment of a policy’s success towards evaluation as a means for real-time learning. Here, formative evaluation tools and approaches, which place an emphasis on learning and adapting policies during their implementation, are a key support action.
6. Continuously scan external issues: This entails staying abreast of the changing context outside of the organisation, be it indications of evolving policy drivers or weak signals of rapidly emerging technologies that may become a concern. Such context scanning is useful to trigger new agile policy cycles and/or to place current policy cycles in the changing and evolving context.
While the support actions are more suited to certain stages in the policy cycle than others, they are not exclusive to a particular stage. The six support actions act as requirements for evidence that can be provided by various strategic intelligence tools and approaches and a variety of forms of policy experimentation. They are presented in Table 7.1 and are unpacked in detail later in this chapter.
Table 7.1. Matching strategic intelligence and policy experimentation to agile support actions
| Agile support action | Strategic intelligence | Policy experimentation |
|---|---|---|
| Continuous scan of external issues | Horizon scanning and technology monitoring | |
| Undertaking a preliminary assessment | Situation analysis; forward-looking technology assessment | |
| Anticipating multiple options | Adaptive foresight | |
| Drawing on collective intelligence | Multistakeholder participation (including participatory technology assessment) | Policy innovation labs; regulatory sandboxes; randomised control trials |
| Trialling policy innovations | | Policy innovation labs; regulatory sandboxes; randomised control trials |
| Evaluating for real-time steering | Formative (real-time) evaluation | Regulatory sandboxes |
Strategic intelligence for agile and adaptive policy
Strategic intelligence will have a key role to play in building policies for new science and technologies whose importance is clear but whose precise implications and pathways are still uncertain. For example, emerging quantum technologies (such as quantum computers, sensors and communications) promise to transform multiple industries, bolster advances in traditional computation and help tackle complex societal challenges through the harnessing of quantum mechanics.
Strategic intelligence refers to knowledge and evidence on current and future developments of new STI and their potential impacts on the economy and society. A broad range of methods can provide strategic intelligence, such as statistical benchmarking, forecasting and modelling, foresight, technology assessment, systems and pathway mapping, and technology monitoring and evaluation. Strategic intelligence providers include agencies carrying out technology foresight and assessment, national academies of science and technology, statistical offices and agencies, ad hoc national commissions, regulatory bodies, and standard-setting bodies.
In the context of STI, strategic intelligence relates to usable knowledge that helps policymakers understand the impacts of STI and anticipate future developments (Robinson et al., 2021[1]). Particularly for rapidly emerging and evolving technology areas, where few systematic data are available to provide trend analysis or direct evidence, strategic intelligence tools and approaches are mobilised to fill the gap.
Strategic foresight and technology assessment have a long history (Robinson and Doherty, 2025[11]), but are now challenged to evolve to suit current STI policy needs, with new approaches to meet these new demands – many of which relate to the need for speed and agility (Robinson, Winickoff and Kreiling, 2023[2]).
Detecting early signals of technological and socio-economic change: Continuous horizon scanning
Horizon scanning functions as a systematic exploratory method designed to detect early indicators of potentially significant technological and socio-economic developments. Horizon scanning identifies “weak signals”, which are early-stage trends or emerging issues that may evolve into transformative and disruptive changes. These signals undergo structured analysis to assess their relevance, trajectory and potential implications for policy and decision making. This process enables policymakers to anticipate emerging challenges and opportunities, particularly in rapidly evolving domains such as synthetic biology, metamaterials, or quantum science and technology.
Continuous or regular horizon scanning provides decision makers with insights into the drivers of change, supporting the formulation of proactive policies that align with evolving technological and societal landscapes (Box 7.1). The process primarily relies on desk-based research, drawing upon existing literature, reports and data sets. To enhance the robustness of findings, horizon scanning is supplemented with expert consultations, participatory workshops and structured foresight exercises to refine, prioritise and contextualise identified signals. Consequently, the effectiveness of horizon scanning is contingent upon the diversity of the consulted experts, the breadth and reliability of data sources, and the methodological rigour applied in synthesising insights.
Unlike conventional trend analysis, which extrapolates from past (including even recent past) data and thus from what is known, horizon scanning explores areas of potential importance that have not yet fully materialised, probing the unknown. Horizon scanning therefore deals inherently with high levels of uncertainty. Many weak signals may fail to materialise, evolve in unexpected directions, or challenge prevailing assumptions within a given technological or policy domain. As such, horizon scanning serves as an adaptive intelligence tool, equipping policymakers with early warning capabilities to navigate complexity and uncertainty in the innovation landscape.

Advancements in automated data analytics have introduced new methodologies in horizon scanning, including web scraping, large language models and machine learning-driven data mining to detect emerging trends at an earlier stage. These computational approaches enhance the ability to capture weak signals in real time, complementing traditional expert-driven scanning processes. This automation in gathering and analysing disparate data sources is a new turn in strategic intelligence – more complex and heterogeneous data can be mobilised in near-real time – opening the possibility for more complex, more agile and more timely horizon scanning.
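One simple way automated scanning can surface candidate weak signals is by flagging terms whose mention frequency accelerates across a document corpus over time. The sketch below is a minimal illustration of that idea only (the function name, data layout and toy corpus are assumptions, not any agency's actual method); real systems described in this chapter use far richer pipelines involving web scraping and language models:

```python
from collections import Counter

def weak_signal_scores(docs_by_year, min_count=2):
    """Rank terms by growth in relative mention frequency.

    docs_by_year: dict mapping year -> list of documents (token lists).
    Splits the years into an early and a late period and scores each
    term by the ratio of its late-period to early-period frequency --
    a crude proxy for "weak signals" worth forwarding to expert review.
    """
    years = sorted(docs_by_year)
    mid = len(years) // 2
    early, late = years[:mid], years[mid:]

    def period_counts(period):
        counts, total = Counter(), 0
        for year in period:
            for doc in docs_by_year[year]:
                counts.update(doc)
                total += len(doc)
        return counts, max(total, 1)

    early_counts, early_total = period_counts(early)
    late_counts, late_total = period_counts(late)

    scores = {}
    for term, late_n in late_counts.items():
        if late_n < min_count:
            continue  # ignore one-off mentions
        early_rate = early_counts.get(term, 0) / early_total
        late_rate = late_n / late_total
        # Small smoothing term so previously unseen terms are flagged
        # strongly rather than causing division by zero.
        scores[term] = late_rate / (early_rate + 1e-6)
    return sorted(scores, key=scores.get, reverse=True)
```

A frequency-acceleration ranking like this would only be a first filter; as the text notes, shortlisted signals still undergo structured expert analysis to judge relevance and trajectory.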
Box 7.1. Examples of horizon scanning for agile policy
In Germany, the Federal Ministry of Research, Technology and Space developed two data-driven analysis tools for internal use in the federal government. The tools were developed as part of its foresight process. The Technology Monitoring Dashboard provides an overview of the international competitive landscape, displaying key innovation indicators (publications, patents, start-ups, venture capital funding) for 16 technology areas and over 90 individual technologies. The second tool, the Emerging Technologies Radar, which is still under development, uses an experimental approach to identify and evaluate emerging technologies using several artificial intelligence methods.
Another illustrative example is how the UK Government Office for Science conducts continuous technology horizon scanning on a weekly basis to systematically identify emerging trends and weak signals within the science and technology landscape. This process begins by gathering inputs, primarily through desk-based research, which incorporates diverse sources such as high-impact scientific publications (for example, general science journals like Science and Nature), third-sector reports, white papers and select social media channels. Where feasible, additional insights are incorporated through participation in specialist conferences, expert consultations and industry events to ensure a comprehensive and dynamic assessment of technological developments. Each week, the collected intelligence is synthesised to identify patterns, emerging themes and novel advancements. A shortlisting process is then undertaken to determine which signals hold the most potential impact and policy relevance. These shortlisted items are subsequently reviewed within a broader analytical group, where key findings may be escalated for inclusion in an early warning brief to inform senior policymakers and stakeholders. As part of this, so-called “rapid technology assessments” are developed to provide the basic information on the emerging technology of interest stemming from the horizon scanning activity.
This example is notable not only for its continuous and regular scanning. The UK Government Office for Science is developing a structured list of weak signals, with the aim of prioritising this list based on the likelihood that a signal will “surprise” or be disruptive in the context of the UK government. Early identification of emerging technologies that may “surprise” is essential for enabling more proactive assessments and policymaking. The Office is currently developing metrics, based on an understanding of technology progression, to determine a weak signal’s likelihood of disruption. These metrics are to be reviewed regularly to ensure that prioritisation of signals remains responsive to dynamic shifts in the technological landscape.
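The disruption metrics described above are still under development and have not been published; purely as a hypothetical sketch of how such a prioritisation might work, a score could combine a few dimensions of technology progression. Every dimension, weight and name below is an assumption for illustration, not the actual UK methodology:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    maturity: float         # 0-1: how far along the technology already is (assumed dimension)
    velocity: float         # 0-1: observed pace of progression (assumed dimension)
    policy_exposure: float  # 0-1: relevance to current policy priorities (assumed dimension)

def disruption_score(s, weights=(0.2, 0.5, 0.3)):
    # Hypothetical heuristic: a low-maturity, fast-moving, policy-relevant
    # technology is most likely to "surprise", so immaturity contributes
    # positively. Weights are arbitrary placeholders for expert judgement.
    w_m, w_v, w_p = weights
    return w_m * (1 - s.maturity) + w_v * s.velocity + w_p * s.policy_exposure

def prioritise(signals):
    """Order a weak-signal list for review, highest disruption score first."""
    return sorted(signals, key=disruption_score, reverse=True)
```

The design point such a sketch illustrates is that the scoring weights, not the sorting machinery, carry the policy judgement, which is why the text stresses reviewing the metrics regularly as the technological landscape shifts.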
In Japan, the National Institute of Science and Technology Policy (NISTEP) has regularly conducted horizon scanning to feed into broader foresight processes that inform policy, beginning with the 11th NISTEP Science and Technology Foresight Survey (NISTEP, 2019[12]). This survey incorporated horizon scanning as the first phase of a foresight study to identify emerging trends; potential future developments; and early signs of change in the relationship between society, science and technology. To capture the societal data, the activity extracted trends from existing materials (including press releases and policy documents) and incorporated insights from regional workshops that explored desired futures from across different regions of Japan. This was complemented by gathering expert opinions through international workshops. To capture scientific and technological insights, data was extracted from materials such as meeting minutes of the National Diet, policy reports and text mining of databases (e.g. the Database of Grants-in-Aid for Scientific Research). This was also complemented by gathering science and technology expert opinions and scanning research and development-related press releases. The outputs of these activities have been referenced in the context of science, technology and innovation policy formulation, including those for the Sixth Science, Technology and Innovation Basic Plan. More recent examples of horizon scanning include online questionnaires on emerging and weak signals with the NISTEP science and technology expert network made up of approximately 1 700 experts.
Getting to grips with the context: Situation analysis
Agile policy approaches are considered worthwhile when circumstances are rapidly changing, there is high uncertainty, or an event or opportunity means that policy action is urgent. Situation analysis, therefore, is an essential step when agile policies are being considered – if circumstances suggest agile policies as a solution, it is highly likely that the “situation” is volatile and requires a preliminary diagnosis.
For controversial or potentially game-changing technologies, situation analysis can take the form of identifying the key issues at play and/or the stakeholders that may be affected by a new technological innovation. For agile policies seeking to harness a new or emerging technology, mapping the key actors in the innovation ecosystem can be beneficial to target further intelligence gathering and help identify policy inroads. For agile policies targeted at driving transformational change at the sector or system level, mapping the key actors and infrastructures in a particular system will help set a baseline for a portfolio of policy interventions. Box 7.2 gives a few examples.
Box 7.2. Examples of situation analysis that can be used for agile policy
In 2021, the European Parliament’s Science and Technology Options Assessment Unit conducted an online stakeholder engagement exercise elucidating the societal concerns surrounding a highly topical and controversial issue – genome editing in crops – to support decision making by the members of the European Parliament who were considering whether regulatory change would be necessary for this rapidly emerging and historically controversial technology field. The situation analysis at the beginning of the process aimed to identify the main hopes and concerns as well as the key stakeholders that are, or could be, implicated in the development and deployment of new genomic technologies applied to crops. The Science and Technology Options Assessment Unit uses the STEEPED approach to support this activity – a systematic way of conducting an initial overview of existing and emerging opinions (hopes and fears) concerning the topic that is being explored. STEEPED as an acronym represents an exploration across seven perspectives: societal, technological, economic, environmental, political/legal, ethical and demographic. The STEEPED approach was also used to identify representatives of the various stakeholder groups that would, or could be, concerned by the new genomic technologies for crops and to ensure a broad spectrum of opinions. In this way, situation analysis combined identifying the hopes and concerns around a technology field and the potentially concerned stakeholder groups (Robinson, Winickoff and Kreiling, 2023[2]).
Situation analysis can also be at the level of understanding the innovation capacity of a region, country or sector. In recognition of the strategic importance for its manufacturing sector of supporting innovation in advanced materials, the province of Quebec, Canada, created PRIMA Québec to identify and bolster the province’s competitiveness. It undertook an initial mapping of the key actors in the industrial advanced materials ecosystem, revealing a critical mass of ~340 firms in 2018 and 570 in 2024 that mobilise advanced materials research and development to create products and services. Intended to be a baseline study with further iterations, the industrial ecosystem mapping was an essential tool for identifying and improving the sector’s market position and facilitating the implementation of public policies to better support advanced materials in the province. Repeating this situation analysis through regular mapping activities allowed PRIMA Québec to better target science, technology and innovation policies to support the growing ecosystem and leverage this fast-moving and strategic area of critical technologies (PRIMA Québec, 2024[13]).
Sources: Robinson, Winickoff and Kreiling (2023[2]); PRIMA Québec (2024[13]).
An important approach to situation analysis includes systems mapping with associated indicators and statistics. Changes in the fundamental properties of a system and the way it behaves have important implications for analysis and the estimation or forecasting of future outcomes. Simple extrapolation from past experience will fail to foresee the way that a system may behave after it has been transformed or once the process of change has begun. Many of these aspects are not well-served by existing metrics, and the current knowledge and evidence base of indicators and statistics that supports policy decisions can further evolve to meet the complexity and uncertainty of STI-enabled transformative change.
Understanding the implications of emerging technologies: Forward‑looking technology assessment
Technology assessment (TA) plays a key role in providing strategic intelligence on new and emerging technologies. As an evidence-based and interactive process, TA systematically examines the societal, economic, environmental and legal dimensions of technological innovation. It serves to inform public debate, shape research and development agendas, and support the formulation of policies that enable and regulate technological progress.
Box 7.3. Examples of forward-looking technology assessment
With regard to technology assessment (TA) for the legislative branch, in the United States, the Science, Technology Assessment, and Analytics group within the Government Accountability Office exemplifies expert-driven TA. It delivers high-quality analyses to Congress, identifying technical challenges and outlining policy options with associated risks, opportunities and implementation considerations. Its rapid technology assessments offer concise and timely evaluations of specific technologies for agile policymaking.
For the executive branch, another example from the United States is the Novel and Exceptional Technology and Research Advisory Committee at the National Institutes of Health (NIH), which was set up to provide recommendations directly to the NIH Director and facilitates public dialogue on the ethical, legal and social implications of novel biotechnologies. For example, one of the committee’s past activities was setting up the Gene Drives in Biomedical Research Working Group, which considered whether existing biosafety guidance is adequate for contained laboratory research using gene drive technology, and the conditions (if any) under which the NIH could consider supporting field release of gene drive-modified organisms.
The OECD is developing an experimental approach referred to as forward-looking technology assessment (FTA). While most TAs examine an existing technology area and explore its potential societal, economic and environmental impacts, FTAs also look forward at future developments in the technology area itself. In this way, FTA can get ahead of technology developments and support agile policies, including anticipatory governance of emerging technologies. Two FTA activities are currently being conducted. The first looks forward at the future convergence of synthetic biology, artificial intelligence and robotics to explore a range of policy issues, such as skills and workforce needs, research security and the clash of governance cultures. Backcasting from these futures, forward-looking intelligence can aid in making a preliminary diagnosis of which policy actions should be taken in the near term and which do not require immediate action. A second FTA focuses on the future embedding of quantum technologies in a variety of sectors, such as health and space. This activity assesses the innovation ecosystem factors that could catalyse and nurture the translation of quantum technologies into these sectors and anticipates the potential impacts of their integration.
Source: Robinson and Doherty (2025[11]).
A core function of TA is to enhance understanding of the current state and potential implications of emerging technologies. This is particularly vital in the context of complex and uncertain technological developments such as synthetic biology, neurotechnologies, and quantum science and technology. TA contributes by structuring fragmented or ambiguous information and transforming it into actionable insights to guide policy decisions. Institutionalised TA efforts provide targeted analyses to legislative and executive bodies (examples are provided in Box 7.3).
Exploring the future with alternative scenarios: Adaptive foresight
Strategic foresight encompasses a suite of methodologies designed to enable policymakers to systematically explore plausible future developments, particularly under conditions of uncertainty and complexity. Rather than aiming to predict a singular or most likely future development, foresight processes map a range of alternative futures and identify the opportunities, risks and interdependencies that could shape policy effectiveness across diverse contexts. This approach helps expand the scope of policy deliberation by challenging conventional assumptions and surfacing latent connections across policy domains.
In the context of technology policy, strategic foresight plays a critical role in assessing how emerging technologies may interact with evolving societal, economic and institutional environments. Typically, this is achieved through the construction of scenarios that explore how external drivers – often beyond the control of individual organisations – may influence the conditions into which technologies are introduced. These scenario exercises serve both an analytical and introspective function: they inform strategic choices within current policy remits while also prompting reconsideration of broader systemic assumptions that underpin those choices.
Importantly, strategic foresight supports long-term, holistic thinking and enhances policy systems’ capacity to act with agility and coherence. By fostering shared reflection among stakeholders, foresight processes help align technology governance with societal values, reinforce interdepartmental co-ordination, and contribute to the development of robust and resilient policy strategies. Ultimately, the value of strategic foresight lies not in foreseeing the future but in enabling preparedness for a range of plausible futures. In doing so, it strengthens the adaptive capacity of technology policy in a rapidly changing world.
As a tailored tool for agile policymaking, adaptive foresight represents a strategic evolution of forward-looking policy practices, blending elements of foresight, adaptive planning and contemporary innovation theory. It seeks to inform policy decisions under conditions where precise prediction is neither possible nor sufficient. At its core, adaptive foresight is both a method and a mindset that supports strategic intelligence for navigating disruptive change. It is grounded in three pillars: 1) participatory foresight practices; 2) adaptive strategic planning that incorporates real options thinking; and 3) an evolutionary understanding of innovation as complex and co-evolutionary. This approach acknowledges the Collingridge Dilemma: in early innovation stages, there is too little information to steer developments effectively; later, entrenched trajectories are difficult to influence. Rather than defining fixed endpoints, adaptive foresight facilitates sense-making and option-building under shifting contexts. Adaptive foresight thus promotes flexibility, anticipatory learning and iterative policy framing.
The adaptive foresight process typically involves elements of futures analysis (for example, scenario development to explore divergent futures), engagement with experts and stakeholders to ensure relevance and legitimacy, real-time learning loops that enable policy feedback and recalibration, and the integration of strategic options to preserve flexibility in the face of uncertainty (examples are provided in Box 7.4).
Box 7.4. Examples of adaptive foresight
The European Commission’s techno-economic foresight on creative content industries, the European Perspectives on the Information Society project (Abadie, Friedewald and Weber, 2010[14]), is one example of adaptive foresight. This project used an adaptive approach to account for rapid digital transformations in the creative industries. By embedding scenario techniques within a flexible, stage‑gated process, the project produced nuanced policy-relevant insights while maintaining the ability to adjust methods and focus based on emergent knowledge.
Another example of adaptive foresight is the “Foresight on Demand” initiative which supported the European Commission’s 2nd Strategic Plan for Horizon Europe (2025-2027) through a broad, multi‑actor process (European Commission, 2023[15]). It combined context scenario-building, expert workshops and public consultations to explore disruptive areas such as climate engineering, artificial intelligence, global governance and health futures. The results not only informed priority-setting but also identified governance and resilience strategies, underlining the role of foresight in adaptive programming.
Sources: Abadie, Friedewald and Weber (2010[14]); European Commission (2023[15]).
By structuring uncertainty rather than removing it, adaptive foresight enhances strategic agility by enabling decision makers to anticipate and prepare for a range of plausible futures. Adaptive foresight can help frame the policy goal and provide input into the iterative design of the agile policy, which is key for the agile policy cycle.
Integrating multistakeholder insights in policy processes: Participatory methods
Engaging diverse stakeholders in the development of STI policies has become an increasingly recognised practice, fostering more inclusive, anticipatory and socially responsive policy frameworks. Stakeholders – including scientists, engineers, affected communities, investors, companies and citizens – offer unique perspectives that contribute essential (and often missing) knowledge and broaden the framing of policy development and implementation with contextual insights. Such a deliberative process strengthens the science-society relationship, enhances public trust and facilitates more effective communication about the emerging policy’s objectives (Paunov and Planes-Satorra, 2023[16]). However, engagement exercises whose outcomes are predetermined risk undermining these objectives, reducing trust and legitimacy (OECD, 2024[17]). A range of participatory methods exists to ensure that multistakeholder insights are integrated into policy and governance discussions (Box 7.5). Consensus conferences, citizen assemblies and citizen juries exemplify such participatory TA mechanisms, directly involving diverse societal actors in evaluating and shaping STI policy development and implementation.
Box 7.5. Examples of participatory methods for agile policymaking
Between November 2021 and March 2022, approximately 50 citizens from different backgrounds met at the German Federal Ministry of Education and Research (BMBF; since May 2025: the Federal Ministry of Research, Technology and Space) Citizens’ Assembly for Research. The citizens contributed ideas, received advice from experts, and engaged in intensive discussions on how to further strengthen participation in research and research policy. The report of the Citizens’ Assembly for Research, with its 25 recommendations for action and 5 overarching guiding principles, was incorporated in the White Paper process for the development of the ministry’s Strategy for Participation in Research launched in 2023. In this way, Germany is taking concrete measures to ensure that multi-stakeholder insights are integrated into science, technology and innovation policy discussions.
Another case is the participatory development of the Brazilian Artificial Intelligence Strategy (MCTI, 2021[18]). The objective was to crowdsource recommendations and commentaries to create a collective vision for “What AI do we want for Brazil?” and to ensure that the eventual strategy addressed societal needs and concerns. Collective intelligence was mobilised to support the design of the strategy through stakeholder consultation on iterative drafts. This not only allowed the policy to be tailored; it also meant that, by the time the strategy was finalised, a range of stakeholders were already aware of it and, in a way, co-owned it. This supported the legitimacy of the policy.
Another example was the Citizen and Multi-Actor Consultation on Horizon 2020 project, implemented from 2015 to 2018 under the European Union’s Horizon 2020 framework, which focused on integrating public engagement into research and innovation agenda-setting. Co-ordinated by the Danish Board of Technology, the project involved a consortium of 29 partners spanning 30 European countries. The project’s methodology drew upon various theoretical frameworks, including Responsible Research and Innovation, Participatory Technology Assessment, and Foresight. The project’s primary objective was to enhance the relevance and responsiveness of the European research and innovation agenda by engaging over 1 000 citizens across 30 countries to articulate desirable sustainable futures. A notable outcome was the formulation of 23 research topics, which were presented as suggestions for the Horizon 2020 work programmes. To validate and enrich these proposals, an extensive online consultation engaged over 3 400 participants, encompassing a broad spectrum of societal perspectives.
On a smaller scale, the Canadian Institutes of Health Research convened citizen panels to set priorities in areas like precision medicine and health data sharing (CIHR, 2021[19]). In 2021, one of these panels focused on unpacking issues and opportunities in the use of artificial intelligence (AI) in healthcare diagnostics. The outcomes of the panel informed federal guidelines on AI ethics in health. Although a one-off exercise rather than an iterative process, this initiative shows that participatory approaches such as citizens’ panels can be applied to emerging technology policies, making them responsive to societal hopes and concerns, and can include a wide range of stakeholders to build legitimacy and trust in a policy related to a potentially controversial area.
Sources: MCTI (2021[18]); CIHR (2021[19]).
These methods are particularly valuable for controversial and ethically complex technologies, as they facilitate multi-perspective dialogue, stimulate public and political debate, and contribute to more socially attuned technology governance. Moreover, participatory TA plays a convening role, fostering mutual understanding among stakeholder groups and enhancing public confidence in policy decisions through inclusive engagement.
Continuous feedback and learning: Formative (real-time) evaluation
Formative (real-time) evaluation has emerged as a distinct approach that prioritises learning and adaptive capacity in the context of STI policies. Unlike conventional results-oriented evaluation frameworks, which primarily focus on accountability, performance measurement and adherence to predefined objectives, reflexive monitoring and evaluation facilitates critical reflection on existing assumptions (based on learning during the implementation of a policy), changing institutional positions (the overall mission of a policy organisation may have changed) and evolving external contextual factors. By fostering a continuous process of feedback, dialogue and critical inquiry, reflexive monitoring and evaluation enables policymakers and practitioners to identify, question and potentially reconfigure entrenched norms, governance models and institutional settings that may hinder agile policies in uncertain and complex policy areas, such as rapidly emerging technologies or transformative innovation policies.
This reflexive approach acknowledges the dynamic and uncertain nature of innovation systems, recognising that initially defined goals may need to be adapted in response to emergent challenges and opportunities. As such, it serves as a mechanism for institutional learning in the policy-formulating organisation, ensuring that evaluation is not merely a retrospective exercise in compliance and impact assessment but a proactive tool for shaping adaptive and agile policy processes (an example is provided in Box 7.6).
Box 7.6. Example of formative evaluation for agile mission-oriented research policy
In 2019, the French government launched the programme Cultiver et Protéger Autrement (Growing and Protecting Crops Differently), a priority research programme aimed at accelerating the transition toward zero-pesticide agriculture. Initiated by the Ministry of Higher Education, Research and Innovation and the General Secretariat for Investment, this initiative embodies a mission-oriented research programme. Recognising the long lead times required for research to influence practice (typically 10-20 years), the programme strategically directs fundamental research toward enabling pesticide-free agricultural systems by 2030-2040.
The funding programme supported ten transdisciplinary, trans-sectoral and multi-actor research projects, each funded with approximately EUR 2 million, with the aim of producing radical, potentially breakthrough innovations. These projects operate within controlled experimental environments to mitigate risk and inform broader application. However, the dual nature of the research – both open‑ended and mission-driven – necessitates navigating high levels of uncertainty regarding research outcomes and their practical impact.
To manage this complexity, the programme embedded a governance structure centred on strategic intelligence, designed to support the orientation, programming and execution of the ten mission‑oriented research projects. This structure enables adaptive steering through real-time learning and co‑ordination. Specifically, four core functions guided this process: 1) monitoring and learning from the project activities; 2) anticipating contextual developments; 3) assessing the performance of ongoing experiments; and 4) fostering synergies within the project portfolio and with external programmes (elsewhere in France and in Europe).
A central mechanism for strategic steering was the use of impact pathways, or projections of how the project’s research activities link to the broader mission goal. Each of the ten projects, as well as the funding programme as a whole, was tasked with constructing and iteratively refining these impact pathways. Initially, ex ante impact pathways provided the foundation for back-casting exercises to align research design with the envisioned transformation (allowing a further articulation of the mission goal). Subsequently, real-time monitoring transformed these pathways into “evolving benchmarks” for assessing progress, facilitating timely adjustments.
These impact pathways were co-developed through participatory workshops involving diverse stakeholders, ensuring the mobilisation of distributed intelligence and the integration of multiple perspectives. Crucially, the nested architecture of ten project impact pathways, the funding programme-level pathways and the overall policy aim enabled the programme manager to monitor progress, identify and exploit synergies, co-ordinate with complementary initiatives, and engage with key stakeholders to maximise collective impact.
Formative evaluation plays a crucial role in governance experimentation, supporting the real‑time adaptation of policy instruments and intervention strategies. Through “reflexive monitoring”, policymakers can assess the effectiveness of different governance instrument mixes and make necessary adjustments in near real time, ensuring greater responsiveness to technological advancements and market dynamics. This capability is particularly valuable in mission-oriented innovation policies, where policy agility and iterative learning are essential for achieving long-term strategic objectives.
Policy experimentation in practice
Policy experimentation involves the deliberate implementation of small-scale and/or temporary policy interventions designed to test the outcomes of new approaches (OECD, 2024[20]). The goal is to assess whether these interventions should be scaled up if successful or phased out if they do not achieve the desired results. Experimentation is essential for developing agile STI policies because it enables policymakers to test and refine approaches in real time, respond to unforeseen challenges, adapt to evolving conditions, and take data-driven decisions that are aligned with the needs of society and the economy.
This section discusses the following two types of policy experimentation, focusing on specific examples of each:
1. Environments for policy experimentation, where new ideas and technology solutions can be tested at a small scale, based on which they can be later scaled up or phased out. Examples include policy innovation labs and regulatory sandboxes.
2. Methods for policy experimentation, aimed at monitoring and evaluating the impacts of diverse policy approaches and programmes. Examples include randomised controlled trials (RCTs) and in-field experiments.
STI policy experimentation goes beyond these two types (Arnold et al., 2023[4]; Bravo-Biosca, 2019[21]). It can also take the form of experimentation with governance models, such as initiatives to enhance cross-governmental collaborations or effectively involve citizens and businesses in policymaking processes (Paunov and Planes-Satorra, 2023[22]).
Experimental environments: Policy innovation labs and regulatory sandboxes
Policy innovation labs (PILs) are organisations or initiatives that apply experimental, scientific lab-like methods to generate and test innovative, evidence-based policy solutions on a small scale before wider implementation. They support agile policymaking by equipping the public sector with practical experience in experimentation and by actively promoting the use of innovative approaches. PILs often act as collaborative hubs, bringing together diverse stakeholders (including citizens, businesses, experts and policymakers) to analyse policy challenges and develop user-centred solutions (Bellefontaine, 2012[23]). These labs can be embedded within government or operate as external entities. In both cases, they act as instigators of change, challenging conventional processes and catalysing new ways of working across the policy system. They provide a space for knowledge mobilisation and policy innovation. Four types of PILs can be distinguished (Wellstead, 2020[24]) (see examples in Table 7.3):
1. Design-led labs: Concerned with the application of “design” thinking to policy and focused on “user-centred” methods such as visualisation techniques, and collaboration with citizens and other stakeholders to clarify problem definitions and co-create solutions.
2. Open government/data labs: Employ innovative approaches in data analytics such as applying new digital and web-based tools to open up and interrogate public data, therefore drawing on expertise from diverse participants to run and apply data analytics.
3. Evidence-based labs: Focused on the application of rigorous evaluation techniques, principally RCTs.
4. Mixed labs: Combine two or more of the three approaches above.
Table 7.2 outlines the key benefits and potential risks associated with PILs while Table 7.3 provides selected examples.
Table 7.2. Benefits and risks of policy innovation labs in science, technology and innovation policy
| Benefits | Limitations and risks |
|---|---|
| Agility: Policy innovation labs (PILs) benefit from their small size, which allows them to act as agile change agents. With fewer oversight and accountability requirements than public sector organisations, they can take more risks, experiment with new ideas and adapt quickly. | Difficulties in scaling up: PILs can lack a clear focus on how to scale up their ideas and experiments into real-world policies and systems. Policy proposals generated within the labs are often set within a “closed” environment, so there is an imperative to provide a reasonable “way forward” if they are to be advanced. |
| Collaboration: PILs promote a more participatory and design-focused approach to innovative policymaking. They foster collaboration by engaging stakeholders, encouraging participation and supporting co-creation. Unlike centralised government structures that operate in isolation, these labs adopt networked approaches, working closely with both internal and external stakeholders, including civil society organisations, the private sector, universities and research centres. | Lack of genuine engagement: PILs operate within broad networks involving multiple stakeholders with diverse perspectives and interests. Ensuring meaningful participation is essential for their effectiveness. However, in some cases, stakeholder involvement may be limited to consultation rather than active engagement, reducing the impact of their input on decision making. |
| Capacity building: PILs help public servants and managers develop practical skills, confidence and empathy in using innovative approaches, methods and tools. By fostering hands‑on learning, they support cultural shifts in public administration, enhancing both skills and mindsets. These labs serve as learning spaces that complement traditional training methods and encourage a more adaptive and forward-thinking public sector. | Funding constraints: Organisational risk aversion and limited commitment from senior decision makers (both internal and external) can create funding challenges, restricting the resources available for innovation. |
| | Insufficient skills and expertise: PILs may face a shortage of in-house expertise in experimental approaches and encounter difficulties in attracting and retaining highly skilled staff. |
Sources: Monteiro and Kumpf (2023[25]); Lewis (2020[26]); Damgaard and Lewis (2014[27]).
Table 7.3. Selected examples of policy innovation labs for science, technology and innovation policy
| Country | Type | Description |
|---|---|---|
| Chile | Design-led lab | The Laboratorio de Gobierno is a Chilean state agency under the Ministry of Finance that aims to accelerate public service transformation through collaborative design and a people‑centred approach. Established in 2015, the lab has a team of approximately 25 employees and provides services such as consulting support for public initiatives to adopt experimental trial and error-based innovation methods. The lab has also built a network of public innovators and a platform where public sector officials can share experiences and learn from one another. The lab’s flagship initiative is the Public Innovation Index, which, developed in collaboration with the Inter-American Development Bank, assesses the innovation capacity of public institutions. Based on a survey of 161 public organisations, the index evaluates institutional resources, practices and processes, and collaboration and openness. |
| Colombia | Design-led lab | Bogota’s Public Innovation Lab was established in 2021 through the District Development Plan to foster public sector innovation. With a team of approximately 16 employees, the lab facilitates virtual exchanges with experts through seminars and discussions on topics like funding, procurement and skills development for public sector officials, alongside providing open-access guides to help navigate the various stages of implementing innovative initiatives (including stakeholder mapping and roadmap development). As part of its activities, the lab offers an online database with information on available public innovation training programmes for government officials in Bogotá. |
| Global | Evidence-based lab | The Innovation Growth Lab (IGL), established in 2014, is a policy lab hosted by Nesta (United Kingdom) and the Barcelona School of Economics (Spain). It has a team that provides direct support to public agencies in designing, implementing and scaling experimental approaches, including the use of randomised controlled trials. Since its creation, IGL has supported over 70 policy experiments across 28 countries (as of July 2025) and has contributed to the establishment of experimentation funds in the United Kingdom and the European Commission. The lab also provides capacity-building services to help public bodies embed experimentation into their institutional frameworks and policymaking cycles. |
Sources: Information on the Laboratorio de Gobierno (Chile) was extracted from Arnold et al. (2023[4]) and Government of Chile (2025[28]); information on the Public Innovation Lab (Colombia) was extracted from OPSI (2018[29]) and Alcaldía Mayor de Bogotá D.C. (2025[30]); information on IGL was extracted from IGL (2025[31]).
Regulatory sandboxes are controlled environments where businesses can test new products, services or business models under relaxed regulations and under the supervision of public authorities (Attrey, Lesher and Lomax, 2020[32]). Their main characteristics are that they are temporary, follow a trial-and-error approach, and involve collaboration and iteration among stakeholders. They originated in the financial services sector, but their use has expanded rapidly to new areas, particularly in highly regulated industries such as transport, energy and health (OECD, 2023[33]). In the context of STI policy, notable examples include AI regulatory sandboxes that enable firms to test machine learning tools under regulatory supervision, and urban mobility sandboxes that support the testing of autonomous or low-emission transport solutions.
Regulatory sandboxes allow policymakers to embed flexibility into policy design by collecting real-world data, identifying potential risks of emerging technologies early on, and adjusting regulations to mitigate them (Almeida Shimizu, 2020[34]; Ranchordás, 2021[35]). Additionally, by promoting continuous stakeholder engagement, they enable closer collaboration between regulators and businesses and facilitate quicker responses to market changes. Their popularity results from the recognition that the technologies required to build more sustainable socio‑economic systems and embrace digital futures may be hampered by existing regulatory frameworks. Table 7.4 outlines the key benefits and potential risks associated with the use of regulatory sandboxes and Table 7.5 presents a set of specific examples.
Table 7.4. Benefits and risks of regulatory sandboxes
| Benefits | Limitations and risks |
|---|---|
| Early regulation learning: Regulatory sandboxes offer a safe space to experiment with innovative ideas without the full burden of existing regulations. They help identify opportunities and risks associated with new innovations at an early stage. Insights gained can inform legal adjustments, allowing regulators to approve specific innovations based on real-world results. | Limited duration and scale: Sandboxes often operate on a small scale with a typically limited duration, meaning that they may not be able to test the full potential of certain innovations. Some technologies, especially those in areas like artificial intelligence, blockchain or sustainable energy, need longer time frames and larger user bases to assess their real-world impact, risks and scalability. |
| Faster innovation deployment: By bridging the gap between experimentation and real-world application, sandboxes speed up the transition of innovations from concept to market. They provide a structured yet flexible environment where businesses, researchers and policymakers can test novel ideas under regulatory oversight but with temporary exemptions or tailored rules. This reduces uncertainty and administrative delays, allowing innovators to refine their solutions based on real-world data before full-scale implementation. | Competitive imbalances: If sandbox participation does not result in clear regulatory approval or market access, some participants may struggle to move beyond the testing phase. Start‑ups, for example, often lack the resources to scale quickly once the sandbox period ends and depend on regulatory certainty to attract investors and customers. This creates an uneven playing field, where larger companies with more resources have a competitive advantage, ultimately reducing the sandbox’s overall effectiveness in driving inclusive growth. |
| Enhanced public participation and acceptance: By involving stakeholders in the innovation process, sandboxes create space for dialogue and collaboration, helping to build societal trust in new developments. | |
Sources: German Federal Ministry for Economic Affairs and Energy (2025[36]); Didenko (2019[37]).
Table 7.5. Selected examples of regulatory sandboxes for science, technology and innovation policy
| Country | Description |
|---|---|
| Denmark | GreenLab is a green industrial park and research and development (R&D) facility established by the Danish government. It focuses on accelerating innovation in green energy generation, storage and sharing, and facilitating the commercialisation of new green energy solutions. Products of the GreenLab include systems for thermal storage in rocks to share surplus energy between companies in the industrial park; and hydrogen, ammonia, methanol, proteins and methane for use in transport, agriculture, materials, food and energy industries (De Silva et al., 2023[38]). GreenLab has been designated as an official regulatory energy test zone, exempting it from existing electricity regulations to test new solutions for integrating unprecedented amounts of renewable energy into the energy system (GreenLab, 2021[39]). One of GreenLab's current projects is GreenHyScale, which is exploring the use of pressurised alkaline electrolysis for large-scale onshore and offshore green hydrogen production (IRENA, 2022[40]). |
| Germany | Launched in 2021, the Digital Test Field on the Federal Waterway Schlei is a regulatory sandbox in Schleswig-Holstein, Germany, co-ordinated by the start-up Unleash Future Boats GmbH. Built as a European test and validation centre for autonomous maritime systems, it operates along a 42-kilometre stretch of the Schlei waterway and serves as a real-world environment to trial zero-emission vessels, digital navigation and connectivity solutions. The company's ZeroOne boat, the world's first autonomous and zero-emission boat powered by fuel-cell technology, which is internationally registered and globally insured, is used for testing. Funded by the Federal Ministry for Digital Affairs and Transport, the project won the Federal Ministry for Economic Affairs and Energy's 2022 Regulatory Sandbox Prize. Beyond testing new technology solutions, the sandbox provides insights regarding system limitations and safety regulations, which can contribute to informing the development of international standards for autonomous and clean inland waterway transport. Other examples of regulatory sandbox initiatives in Germany can be found on the Federal Ministry for Economic Affairs and Energy's Regulatory Sandbox Innovation Portal (in German). |
| Malaysia | The National Technology and Innovation Sandbox was announced in June 2020 under the Short-Term Economic Recovery Plan (PENJANA) and launched in August 2020. With a USD 22 573 400 (MYR 100 million) allocation, it supports researchers, innovators and entrepreneurs in testing their products and services in real-world conditions while accessing grants to accelerate commercialisation. By relaxing certain regulatory requirements, the sandbox fast-tracks innovation from R&D to market readiness. Notable pilots include "HelloWorld Robotics", which developed an autonomous delivery system for transporting goods from merchants to end-users, and "Akar Indah Engineering", which created a smart waste management system integrating Internet of Things, sensor technology and cloud computing for local fresh markets. |
| Portugal | Established in 2021, the technological free zones are regulatory sandboxes that provide real or quasi-real environments for testing innovative technologies, products and services. Two such zones are currently operational: Infante D. Henrique, which focuses on testing vehicles or technologies that can operate either with human control (manned) or autonomously/remotely (unmanned), primarily for security and defence applications; and Matosinhos, which aims to position Portugal as a leader in developing and testing innovative mobility solutions for urban carbon neutrality. |
Sources: Information on GreenLab (Denmark) was extracted from De Silva et al. (2023[38]); GreenLab (2021[39]); IRENA (2022[40]); EC-OECD (2025[41]). Information on the Digital Test Field on the Federal Waterway Schlei (Germany) was extracted from German Federal Ministry for Economic Affairs and Energy (2025[36]). Information on NTIS (Malaysia) was extracted from Malaysian Ministry of Science, Technology and Innovation (2024[42]). Information for the technological free zones (Portugal) was taken from Portugal's National Innovation Agency (2025[43]).
Assessment methods: Randomised control trials
Randomised control trials are a type of impact evaluation method in which participants (individuals, households, firms, etc.) are randomly assigned to two or more groups (Figure 7.3).
Figure 7.3. Randomised control trials
These typically include one or more treatment groups that receive different versions of an intervention, and a control group, which may receive no intervention or the current standard policy or practice (against which the new intervention is benchmarked). Researchers then compare outcomes between these groups. Because the assignment is random, the treatment and control groups should be similar in all respects except for the intervention received. This allows researchers to attribute any differences in outcomes to the intervention itself, rather than to confounding factors or selection bias (J-PAL, 2023[44]).
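The random-assignment logic described above can be sketched in a few lines of Python. This is a minimal illustration, not an analysis of any study cited in this chapter: the pool of firms, the outcome distribution and the +2.0 treatment effect are all invented for exposition.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical pool of 200 firms; shuffling before splitting implements
# random assignment, which removes selection bias in expectation.
firms = list(range(200))
random.shuffle(firms)
treatment, control = firms[:100], firms[100:]

def outcome(firm_id: int, treated: bool) -> float:
    """Simulated outcome: noisy baseline plus a +2.0 effect if treated."""
    baseline = random.gauss(10.0, 3.0)
    return baseline + (2.0 if treated else 0.0)

treated_outcomes = [outcome(f, True) for f in treatment]
control_outcomes = [outcome(f, False) for f in control]

# Because randomisation makes the two groups comparable in expectation,
# the simple difference in means estimates the average treatment effect.
ate = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated average treatment effect: {ate:.2f}")  # close to the true +2.0
```

In a real trial the outcomes would of course be observed rather than simulated, and the estimate would be accompanied by a standard error or confidence interval; the point of the sketch is only that random assignment is what licenses the causal reading of the difference in means.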
RCTs support strong monitoring and evaluation by allowing researchers and policymakers to design studies that answer specific questions about a programme’s effectiveness and its underlying economic theory. Beyond determining whether a policy or programme works, RCTs can also identify which components drive success, which version of an intervention is most effective, whether results can be replicated in different contexts and how impact is achieved (or not achieved) (Edovald and Firpo, 2016[45]). In today’s rapidly evolving landscape, where significant investments are being made in green and digital transitions, RCTs can help test new approaches and ensure public resources are allocated efficiently.
RCTs have long been used in clinical research, but have recently gained more widespread application in public policy. Notably, the number of RCTs in innovation, entrepreneurship and business growth has been growing in recent years (Firpo and Phipps, 2019[46]). The IGL Trials Database, which aims to compile all RCTs conducted in this field, included 226 such experiments as of 2022 (Serin et al., 2022[47]). Table 7.7 provides some illustrative examples of how RCTs have been applied in the field of STI policy. Despite their potential, the applicability of these methods in the field of STI policy also faces several limitations, as outlined in Table 7.6.
Table 7.6. Benefits and risks from randomised control trials
| Benefits | Limitations and risks |
|---|---|
| Establishing causal effects: Randomised control trials (RCTs) help determine the effectiveness of a policy or intervention. Since participants are randomly assigned to different groups, the only systematic difference between them is the intervention itself, creating a credible counterfactual for comparison. This eliminates biases, including selection bias, where certain groups (e.g. more innovative firms) are more likely to benefit from a policy. | Limited generalisability and transferability of the results: One of the most frequent criticisms of RCTs is around their low generalisability, meaning that it can often be difficult to transport learnings from an RCT to different contexts. Although trials present the best evidence on the outcomes of an intervention, that evidence is specific to the context in which the intervention was set, and it is not always possible to infer that similar interventions would have the same effect in other environments, or even with a bigger population. |
| Practical insights for policymakers: RCTs can provide policymakers and innovation programme managers with valuable insights to refine policies or programme design after the trials. By analysing the experiment's results, they can assess how an intervention was implemented, identify constraints and make necessary adjustments to improve policy effectiveness. | Limited insights into causal mechanisms: While RCTs help identify whether an intervention works and to what extent, they often provide limited understanding of why it works or not. This is typically left to researchers' interpretation. However, understanding these mechanisms is critical for policymakers and practitioners to decide whether to adopt, scale or replicate the intervention being tested. |
| Efficient public spending and government accountability: By establishing causal effects, RCTs provide reliable evidence on whether a programme works, helping to prevent wasteful spending and ensuring public funds are directed toward effective policies. They also enhance government accountability by offering transparent, data-driven justifications for funding decisions. | Cost and time requirements: Because of operational requirements inherent to their design, notably the random allocation of the policy intervention under investigation, RCTs can be expensive and time-consuming to implement, particularly when dealing with large sample sizes and long-term outcomes. |
| | Fairness and ethical concerns: RCTs can sometimes face ethical constraints, such as the impossibility of denying an intervention to a subset of participants. These relate to deeply rooted moral and legal traditions around the equal treatment standards that are challenged by random selection criteria. |
Source: Based on Edovald and Firpo (2016[45]).
Table 7.7. Selected examples of randomised control trials for science, technology and innovation policy from Horizon 2020
| Country | Description |
|---|---|
| Italy (co-ordinated by Hub Innovazione Trentino) | The 200SMEchallenge project, funded through Horizon 2020 with a budget of EUR 499 737, was implemented between 2020 and 2022 to assess whether innovation contests using user-centred design methods could increase small and medium-sized enterprises' (SMEs) readiness to adopt digital design practices. The project involved running a UX Challenge, a one-week structured design sprint carried out by multidisciplinary teams of students and supported by experts, to help SMEs improve the user experience of their digital products. A randomised control trial (RCT) was conducted with nearly 200 SMEs from 7 European countries. Sixty SMEs were randomly selected to participate in the UX Challenge; the remainder formed the control group. Three weeks after the intervention, treated firms reported significantly higher knowledge of Design Sprint methods (a 19% increase) and practical understanding of user-centred design (a 12% increase). However, no statistically significant differences were found in firms' short-term intentions to invest in digital design. This suggests that while user-centred design challenges can raise awareness and technical knowledge, their impact on behaviour change may require complementary support to overcome internal organisational and financial barriers. |
| Lithuania (led by the Lithuanian Innovation Centre) | The InReady Project, funded through Horizon 2020 with a budget of USD 64 889 (EUR 60 000), was implemented between 2019 and 2021 to support start-ups in enhancing their investment pitches through a structured digital tool (InReady). The tool was tested through an RCT involving 27 start-ups, divided into a control group (which pitched without support) and a treatment group (which used the InReady tool). The evaluation showed that the treatment group significantly improved in areas such as business strategy, market positioning and financial projections, while the control group faced challenges in structuring their pitches and defining their value proposition. |
| Netherlands (led by the Netherlands Enterprise Agency and Statistics Netherlands) | The Dutch innovation voucher scheme, implemented in 2004-2005, aimed to stimulate collaboration between SMEs and public knowledge institutes. Vouchers were allocated by lottery, enabling an RCT with over 1 000 firms. By linking the trial to administrative data over a 12-year period, researchers found that treated firms had higher survival rates (4%), greater use of research and development (R&D) tax credits (5%), more R&D activity (12% increase in hours) and a higher employment rate. While productivity gains were not statistically significant across the entire sample, firms that sustained R&D after receiving the voucher did show improvements. This study provides robust evidence that even small-scale interventions can have lasting effects on innovation behaviour, especially when they help firms take their first steps into R&D collaboration. |
| Spain (led by the Instituto para la Competitividad Empresarial de Castilla y Leon) | The DIHnamic Project, funded through Horizon 2020 with a budget of USD 536 185 (EUR 496 250), was implemented between 2019 and 2022 and aimed to determine the optimal level of support in digital innovation hubs for SMEs to accelerate their digitalisation processes. To assess the impact of additional support, the project conducted an RCT involving 47 SMEs across 6 digital innovation hubs. SMEs were randomly assigned to 2 groups: a control group (23 SMEs) receiving Service A, which included 20 hours of specialised advice on digitalisation strategies, and a treatment group (24 SMEs) receiving Service B, which included 80 hours of consultancy and hands-on experimentation with digital solutions. The evaluation aimed to determine whether the extra support in Service B led to a significant increase in digital investment and maturity. However, results showed no statistically significant difference between the two groups in the analysed dimensions, suggesting that additional support did not accelerate digitalisation beyond the standard advisory service. |
Sources: Examples were taken from the Innovation Growth Lab trials dataset (IGL, 2024[48]). Information on InReady (Lithuania) was also extracted from European Commission (2024[49]) and for DIHnamic (Spain) information was taken from European Commission (2024[50]).
Challenges and policy responses to support wider policy agility
Despite their benefits, policy intelligence and policy experimentation face several challenges, which helps explain why they are not more widely used. This section discusses these challenges.
Challenges and opportunities for embedding strategic intelligence and policy experimentation
While policy experimentation and various tools for strategic intelligence can provide important returns on the resources invested in them, several challenges have to be addressed for successful implementation, including:
Building public sector capacities and skills: Policy experimentation for STI can be challenging for the public sector, and officials may need new training and capacities to play a role as an incubator and accelerator of new experimental approaches to policy. The ability to design and deliver public services in new ways, combined with a user-centric focus on how industry and consumers benefit from them, is an important skill set for innovation and experimentation in STI policy development. Recruiting staff with diverse skills and profiles into the public sector contributes to this, including staff with scientific and entrepreneurial backgrounds. With regard to strategic intelligence, a key challenge is the uptake of the results of strategic intelligence activities into STI decision making. This may require fit-for-purpose institutional capacities and structures, as well as the skills needed to interpret such results. This is discussed in more detail below.
Overcoming power dynamics and structural barriers limiting the integration of evidence in policymaking: Intelligence has no practical meaning unless it can be actioned and used. Likewise, learnings from policy experimentation must also be recognised as legitimate and integrated into policymaking processes to inform policy. This poses a major challenge. For example, as earlier OECD work has shown (Robinson, Winickoff and Kreiling, 2023[2]), strategic intelligence is often developed by a neutral “honest broker” at a distance from the decision-making process – therefore, independent and trustworthy. Similarly, those engaged in policy experimentation often occupy advisory or analytic roles but remain peripheral to the core decision-making space. However, for agile policy cycles and the greatest impact, such intelligence would benefit from being conducted as part of the policymaking process and with the involvement of decision makers. Bridging this disconnect requires deliberate integration of strategic intelligence and experimentation functions into the political and strategic centres of decision making. One approach to resolve this is to build best practices in agile intelligence production close to, or conducted by, policymaking institutions.
Vested interests and established networks of incumbent actors can pose a further barrier to evidence uptake. Even well-substantiated recommendations may be ignored if they threaten existing systems. These dynamics may significantly limit the potential for strategic intelligence and experimentation to influence decision-making processes.
Enhancing the legitimacy of strategic intelligence and policy experimentation: Mainstreaming experimentation in STI policy requires governments to create an environment where testing new approaches is not only accepted but actively encouraged. Providing clear mandates, adequate funding and institutional backing ensures that actors have the means and authority to drive experimentation forward. This means ensuring that policymakers and institutions have the support and resources to experiment.
Political legitimacy plays a key role in building trust among stakeholders by showing that experimentation is deliberate and transparent, and that strategic intelligence provides for an evidence-based process aimed at improving policies. Clear and proactive communication about the goals, processes and outcomes of these efforts helps reinforce that legitimacy. Establishing rigorous yet adaptive evaluation frameworks – and openly learning from both successes and failures – further strengthens accountability and public trust in innovative approaches.
Embedding agility while ensuring robust monitoring and evaluation: Incorporating iterative learning and regular assessment into STI policy implementation helps determine whether initiatives are effectively achieving their goals. This enables timely identification of what works, what does not, and when to scale up or discontinue initiatives. To support this, experimental structures should remain reversible – so that they can be discontinued without major disruption if they prove unsuccessful – and adaptable to lessons learnt during implementation. However, over time, vested interests can form around certain initiatives, making it harder to make changes. Experimentation as part of a broader portfolio of support actions can enhance flexibility (BMWK, 2025[51]).
Constraints on risk-taking required for experimentation in public policy
Several factors constrain the risk-taking required for experimentation in public policy. These include limitations on the use of public funding for experimental approaches in policy. For example, there might be constraints on the random disbursement of public funding – as required for RCTs – or rigid criteria and long processes to apply for new funding instruments – as would be required for more agile approaches to policymaking, such as piloting at a small scale and deciding on that basis whether to expand or downscale those initiatives.
Spending taxpayer money on initiatives with uncertain outcomes and no guaranteed results is a valid concern. Accountability and checks on spending, along with other oversight mechanisms to prevent the misuse of public resources, are similarly important.
At the same time, it is important to avoid “false efficiency” – a system that appears to be cost‑effective in the short term but ultimately stifles innovation by rejecting the “good waste” that comes with testing and learning (Potts, 2009[52]). Potential benefits of experimentation are often abstract, uncertain and shared across multiple stakeholders. In contrast, risks of failed experimentation are often specific, measurable and directly linked to individual decisions (Torugsa and Arundel, 2017[53]; Ritchie, 2014[54]). This creates a bias where failures stand out more than successes, making public servants more risk-averse and less likely to adopt innovative approaches.
Enabling policy experimentation will require addressing those constraints, including by ensuring the transparency, accountability and pay-offs of these experimental policy initiatives; adopting portfolio assessments of policy packages; engaging in efforts to ease bureaucratic hurdles; and adopting regulatory adjustments. Importantly, being transparent about policy experiments, including by submitting them to rigorous assessments, contributes to reducing the risk of spending public resources poorly. This is where policy experimentation in evaluation and monitoring itself can help.
More complicated is the notion of dealing with new policy initiatives that may or may not succeed, such as policy tools for breakthrough innovations that carry higher risks of failure. What is essential is to identify ways to evaluate a policy portfolio’s overall success, rather than expecting every single initiative to succeed.
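A stylised calculation illustrates why judging a portfolio as a whole differs from judging each project. The numbers below (success probability, pay-offs, costs) are assumptions chosen purely for exposition, not empirical estimates of any programme discussed in this chapter.

```python
# Stylised portfolio view: judge a programme of high-risk experiments by
# its expected overall return, not by whether each project succeeds.
n_projects = 10
cost_per_project = 1.0       # normalised units of public funding
p_success = 0.2              # most high-risk experiments are expected to fail
payoff_if_success = 10.0     # a rare breakthrough repays many failures

expected_return = n_projects * p_success * payoff_if_success
total_cost = n_projects * cost_per_project
expected_failures = n_projects * (1 - p_success)

print(f"Expected return {expected_return:.1f} vs cost {total_cost:.1f}; "
      f"expected failures: {expected_failures:.0f} of {n_projects}")
# → Expected return 20.0 vs cost 10.0; expected failures: 8 of 10
```

Under these assumed numbers the portfolio is worthwhile in expectation even though most individual projects fail, which is exactly why project-by-project accountability can penalise a programme that is sound in aggregate.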
Bureaucratic and regulatory procedures should be another continued target for assessments. While they play a vital role in safeguarding key principles such as accountability, compliance, transparency, stability and risk minimisation, they can also challenge the flexibility needed for implementing agile policy approaches. This includes laws and regulations that limit or prohibit policy experimentation, as well as lengthy approval cycles and rigid budgeting mechanisms.
Finally, experimentation might also lead to saving funds. Rigorously evaluating policies with experimental methods, such as RCTs, provides evidence on what works and what does not, helping governments spend limited public funds more effectively. Raising greater awareness of the benefits of such methods as part of monitoring and evaluation processes supports responsible public spending. The higher costs of setting up new, more rigorous assessments may well be justified and are also likely to decrease over time with more experience in conducting and applying them.
Incentive structures and capacities
Limited capacities among public sector officials to implement innovative policy approaches are another key factor holding back further progress (OECD, 2024[20]). Building these capacities includes ensuring that a range of disciplinary backgrounds and forms of expertise are represented within public administrations, including expertise from industry and diverse scientific fields, ranging from social sciences and legal backgrounds to engineering and natural sciences. Moreover, offering specific training on policy experimentation and evaluation methods to public administrators and civil servants can help by illustrating how they work and clearing up misconceptions about their use. Other core skills for public sector innovation include data literacy for evidence-informed decision making and storytelling to effectively communicate ideas (Figure 7.4).
Figure 7.4. Core skills for public sector innovation
Moreover, the adoption of more user-centric approaches can help improve how policies respond to the evolving needs of the users of these services (see Figure 7.4). This involves systematically assessing whether proposed projects, policies or services meet users’ needs as part of the policy approval process. Adequate resources and time must be allocated to understanding and analysing these needs, as well as conducting regular research and testing to ensure policies remain relevant (OECD, 2017[55]). This is particularly important for policy experimentation, where iterative testing and feedback loops are crucial to refining and improving initiatives. Without meaningful engagement, experimental policies risk being misaligned with real-world needs and may fail to gain public trust and adoption. To build trust and ensure impactful participation, it is preferable for policymakers to prioritise a few well-designed engagement processes with higher policy impacts rather than spreading efforts across numerous low-impact processes organised as “tick-the-box” formalities (OECD, 2024[56]; Paunov and Planes-Satorra, 2023[16]). Poorly executed engagement risks disappointing participants and eroding trust in government.
Beyond capacities, there is the key imperative of creating a culture for innovation, where incentives are provided for engaging in experimental policy approaches, with an experimentation-prone system that encourages public servants to embrace agile policy approaches and use new digital tools for data collection and analysis (Arnold et al., 2023[4]). This requires exploring public officials’ incentives as regards policy experimentation, which largely depend on the performance assessment and employment promotion dynamics in place and the internal hierarchies and the opportunities these provide for bottom-up initiatives. Champions within the public sector – whether senior leaders, analysts or programme managers – can be instrumental in shaping this culture by creating protected spaces for learning, even in systems with limited formal incentives.
Top-level endorsement of policy experimentation – as illustrated for Canada in Box 7.7 – is also of paramount importance in building and institutionalising a culture of experimentation. Canada’s “Experimentation Direction for Deputy Heads” gives government departments a mandate to allocate a portion of programme funds for experimentation and create clear processes for evaluating and integrating lessons from experiments into new programmes. Finland established the Experimental Finland initiative (OECD, 2017[57]), which encourages and supports line ministries to undertake policy experiments by means of explicit top-level endorsement.
Establishing cross-sectoral governance mechanisms to jointly learn about policy experimentation, such as by using centralised databases, can also be used to track experiments, share results and minimise duplication of efforts. Improving co-ordination helps reduce the risk of “projectification”, which occurs when too much focus is placed on small-scale pilots, leading to fragmented efforts and making it harder to scale successful initiatives due to limited resources and capacity (OECD, 2024[20]).
Box 7.7. Canada’s approach to supporting policy experimentation
Since 2015, Canada has adopted a new governance approach that promotes public sector innovation, with a strong focus on encouraging federal departments to experiment with new methods to enhance policymaking. To support this, the government has launched several programmes to overcome barriers to policy experimentation and expand its use. A subset of these initiatives is outlined below.
Institutionalising policy experimentation
The Impact Canada initiative, launched in 2017, seeks to promote the adoption of innovative approaches by supporting government departments in designing and evaluating projects using prizes, challenges, micro-funding and other outcome-based approaches.
A key achievement of this initiative has been improving access to information for policymakers and other stakeholders through clear, accessible materials on policy experimentation. As part of this effort, the Canadian government developed the Measuring Impact by Design: A Guide to Methods for Impact Measurement, which seeks to promote the use of experimental and quasi-experimental approaches across the country. The guide demonstrates that, with proper planning, most programmes can integrate experimental impact evaluation methods with minimal or no disruption to their normal operations.
The Experimentation Direction for Deputy Heads is another important framework introduced under Impact Canada. This document reinforces the government’s commitment to allocating a fixed percentage of programme funds for testing new approaches and provides guidance for deputy heads on implementing this commitment.
To build practical capacity, Canada launched the Experimentation Works initiative in 2018 to train public servants in experimental methods. The initiative used a hands-on, “learning-by-doing” approach, offering accessible learning modules, supportive tools and an “experimenting in the open” model that encouraged transparency and collaboration. A key feature of the initiative was the support provided to five small-scale, department-led experiments, designed and implemented by public servants (see detailed descriptions here). By guiding these experiments from start to finish, Experimentation Works strengthened practical understanding of experimentation, demonstrating its value and generating concrete examples of federal experiments.
Sources: Government of Canada (2024[58]); OPSI (2018[29]).
Institutionalising experimentation and strategic intelligence
Institutionalising experimentation and strategic intelligence production and use can support broader uptake by embedding them into national programmes and frameworks. It requires that governments create an environment where testing new approaches is not only accepted but actively encouraged. This involves:
Facilitating access to information: Providing policymakers with clear, accessible materials on policy experimentation and strategic intelligence approaches. These resources could explain different types of experimentation (e.g. sandboxes and RCTs) and strategic intelligence methods, their distinct roles, and how they contribute to policymaking. This includes developing guidelines and frameworks that clarify their impact and ensuring their integration into national strategies.
Building a well-defined roadmap: Outlining clear objectives, identifying key areas for experimentation and establishing mechanisms for scaling successful initiatives. This would include defining success criteria, setting benchmarks for progress and ensuring continuous evaluation (OECD, 2024[20]).
Securing long-term political and financial support: Embedding policy experimentation and strategic intelligence into national budgets and legislative frameworks. This involves creating dedicated funding streams, addressing administrative hurdles, fostering cross-sector collaboration and ensuring high-level political commitment to sustain experimentation beyond political cycles.
Learning from combinatorial approaches
In some cases, strategic intelligence and policy experimentation approaches are used in combination. For example, in July 2024, the UK Regulatory Innovation Office was set up as a pro-innovation governance unit to facilitate the rapid and responsible deployment of innovation. It is built around three main pillars: 1) a knowledge pillar mobilises strategic intelligence to better understand the evolving nature of technology areas and appropriate metrics to support strategic decision making over time; 2) a strategic pillar establishes priorities, particularly industrial priorities, by developing an agile and responsive system that can develop and deliver the governance required for these new technologies; and 3) a capability pillar enables institutional reform and builds the regulatory skills required to identify and respond to the significant economic and societal changes that emerging technologies may bring. This and other examples reveal the added value of combining these different approaches into a coherent programme of activities to support STI policy.
Conclusions
Incremental policy cycles, as shown in Figure 7.1, provide stability for the public, strategic orientation for industrial stakeholders to align with, and a basis for patient, long-term investment. However, as outlined in this chapter, there are circumstances where agile policymaking holds promise: in times of urgency, whether planned (driving forward technological innovation to improve competitiveness or solve societal challenges) or unplanned (reacting to crises, for example pandemics or the ramifications of war and other conflicts).
Building capacity for informed and agile policymaking requires experimentation and strategic intelligence. Anticipating, testing and modulating policies in real-world conditions helps identify what works, what does not and where improvements are needed. Integrating various strategic intelligence tools into the actions that support agility can help create a culture of anticipation and of learning while doing, while also increasing flexibility and adaptability within bureaucratic structures (through better co-ordination mechanisms and simplified processes). This flexibility and adaptability can reduce the cost and complexity of launching experiments.
Multiple actions can foster and accelerate an agile and adaptive culture among policymakers. These include institutionalising policy experimentation by embedding it into national programmes and frameworks to help overcome fear of failure or political consequences that often make public administrations hesitant to innovate. Additionally, training programmes help build capacities in the public sector to leverage strategic intelligence and use policy experimentation.
Overcoming these challenges requires substantive rethinking of incentive schemes. In the case of policy experimentation, for instance, moving from successful small experiments to phasing out failures or expanding successes is not a given. Acknowledging failure is often discouraged due to misaligned incentives, while scaling success can be hindered by limited financial resources and legal or regulatory complexities that emerge when moving toward wider implementation (OECD, 2024[20]).
As regards strategic intelligence, the application of these tools needs to be cognisant of the absence of hard evidence, particularly in contexts of high uncertainty and complexity around rapidly emerging and evolving technologies. Understanding these limitations and focusing on learning are essential for a more robust use of these tools to the benefit of agile STI policy.
This chapter has presented experiments in agile strategic intelligence and policy experimentation. Together, the insights presented build an ideal picture of an agile, intelligence-driven policymaking process. However, the realities of the daily work of those in public administrations, their practices and institutional constraints must not be ignored. While there are promising approaches to strategic intelligence production and policy experimentation, a wide array of challenges remains. The opportunity remains to explore these challenges further and gather additional insights from policy experimentation and the use of strategic intelligence.
References
[14] Abadie, F., M. Friedewald and K. Weber (2010), “Adaptive foresight in the creative content industries: Anticipating value chain transformations and need for policy action”, Science and Public Policy, Vol. 37/1, pp. 19-30, https://doi.org/10.3152/030234210X484793.
[30] Alcaldía Mayor de Bogotá D.C. (2025), “Laboratorio de Innovación Pública de Bogotá”, web page, https://ibo.bogota.gov.co.
[34] Almeida Shimizu, J. (2020), Innovation Assessment in Regulatory Sandboxes, Munich Intellectual Property Law Center.
[4] Arnold, E. et al. (2023), “Navigating green and digital transitions: Five imperatives for effective STI policy”, OECD Science, Technology and Industry Policy Papers, No. 162, OECD Publishing, Paris, https://doi.org/10.1787/dffb0747-en.
[32] Attrey, A., M. Lesher and C. Lomax (2020), “The role of sandboxes in promoting flexibility and innovation in the digital age”, OECD Going Digital Toolkit Notes, No. 2, OECD Publishing, Paris, https://doi.org/10.1787/cdf5ed45-en.
[23] Bellefontaine, T. (2012), Innovation Labs: Bridging Think Tanks and Do Tanks, Policy Horizons Canada, Ottawa, Ontario, https://publications.gc.ca/site/eng/432058/publication.html.
[51] BMWK (2025), “Regulatory sandboxes: Testing environments for innovation and regulation”, web page, https://www.bmwk.de/Redaktion/EN/Dossier/regulatory-sandboxes.html.
[21] Bravo-Biosca, A. (2019), “Experimental innovation policy”, NBER Working Paper Series, No. 26273, National Bureau of Economic Research, Cambridge, MA, https://www.nber.org/system/files/working_papers/w26273/w26273.pdf.
[6] Cairney, P. (2012), Understanding Public Policy: Theories and Issues, Palgrave Macmillan, London.
[19] CIHR (2021), Strategic Plan 2021-2026, Canadian Institutes of Health Research, https://cihr-irsc.gc.ca/e/52481.html#section_5 (accessed on 25 March 2025).
[27] Damgaard, B. and J. Lewis (2014), “Accountability and citizen participation”, in The Oxford Handbook of Public Accountability, pp. 258-272, Oxford University Press, https://books.google.fr/books?hl=en&lr=&id=aaecAwAAQBAJ&oi=fnd&pg=RA1-PT221&ots=g7suNP0Rvc&sig=HmjoRLrkaP3Z3jorLVZFPk23iuA&redir_esc=y#v=onepage&q&f=false.
[38] De Silva, M. et al. (2023), “Unlocking co-creation for green innovation: An exploration of the diverse contributions of universities”, OECD Science, Technology and Industry Policy Papers, No. 163, OECD Publishing, Paris, https://doi.org/10.1787/b887f436-en.
[37] Didenko, A. (2019), Regulatory Sandbox Concept: Models, Benefits and Risks, https://events.development.asia/materials/20190416/regulatory-sandbox-concept-models-benefits-and-risks.
[41] EC-OECD (2025), STIP Compass: International Database on Science, Technology and Innovation Policy (STIP), https://stip.oecd.org (accessed on 24 March 2025).
[49] European Commission (2024), “Designing the Service to Improve the Investor Readiness of Start-ups”, web page, https://cordis.europa.eu/project/id/824208.
[50] European Commission (2024), “Digital Innovation Hubs: Dynamic Facilitation and Thrust from Regional Innovation Agencies”, web page, https://cordis.europa.eu/project/id/824186.
[15] European Commission (2023), Foresight on Demand: Foresight Towards the 2nd Strategic Plan for Horizon Europe, Publications Office of the European Union, https://doi.org/10.2777/77971.
[46] Firpo, T. and J. Phipps (2019), “Running experiments in innovation and growth policy: What can we learn from recent experience?”, Journal for Research and Technology Policy Evaluation, Vol. 47, pp. 46-50, https://repository.fteval.at/id/eprint/414.
[36] German Federal Ministry for Economic Affairs and Energy (2025), “Regulatory sandboxes: Testing environments for innovation and regulation”, web page, https://www.bundeswirtschaftsministerium.de/Redaktion/EN/Dossier/regulatory-sandboxes.html.
[58] Government of Canada (2024), “About us – What is Impact Canada”, web page, https://impact.canada.ca/en/about.
[28] Government of Chile (2025), “Laboratorio de Gobierno”, web page, https://www.lab.gob.cl/que-es-el-lab.
[39] GreenLab (2021), “GreenLab designated regulatory test zone”, web page, https://www.greenlab.dk/knowledge/test-zone-designation-paves-the-way-for-the-use-of-green-power.
[7] Haddad, C. et al. (2022), “Transformative innovation policy: A systematic review”, Environmental Innovation and Societal Transitions, Vol. 43, pp. 14-40, https://doi.org/10.1016/j.eist.2022.03.002.
[9] IEA (2023), Net Zero by 2050: A Roadmap for the Global Energy Sector, International Energy Agency, Paris, https://www.iea.org/reports/net-zero-roadmap-a-global-pathway-to-keep-the-15-0c-goal-in-reach.
[31] IGL (2025), “About IGL”, web page, https://www.innovationgrowthlab.org/about.
[48] IGL (2024), IGL Trials Database, https://www.innovationgrowthlab.org/igl-database-v2.
[40] IRENA (2022), “Regulatory sandboxes”, web page, https://www.irena.org/Innovation-landscape-for-smart-electrification/Power-to-hydrogen/20-Regulatory-sandboxes.
[44] J-PAL (2023), “Introduction to randomized evaluations”, https://www.povertyactionlab.org/resource/introduction-randomized-evaluations.
[26] Lewis, J. (2020), “The limits of policy labs: Characteristics, opportunities and constraints”, Policy Design and Practice, Vol. 4/2, pp. 242-251, https://doi.org/10.1080/25741292.2020.1859077.
[42] Malaysian Ministry of Science, Technology and Innovation (2024), National Technology & Innovation Sandbox website, https://sandbox.gov.my/sandbox-partners?tab=glance.
[18] MCTI (2021), Summary of the Brazilian Artificial Intelligence Strategy (EBIA), Brazilian Ministry of Science, Technology and Innovations, https://www.oecd.org/en/publications/access-to-public-research-data-toolkit_a12e8998-en/brazilian-strategy-for-artificial-intelligence_936c5793-en.html.
[25] Monteiro, B. and B. Kumpf (2023), “Innovation labs through the looking glass: Experiences across the globe”, OPSI Blog, https://oecd-opsi.org/blog/innovation-labs-through-the-looking-glass.
[12] NISTEP (2019), The 11th Science and Technology Foresight: S&T Foresight 2019 – Summary Report, National Institute of Science and Technology Policy, Japan, https://doi.org/10.15108/nr183 (accessed on 24 March 2025).
[8] OECD (2024), Declaration on Transformative Science, Technology and Innovation Policies for a Sustainable and Inclusive Future, OECD, Paris, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0501.
[10] OECD (2024), “Framework for Anticipatory Governance of Emerging Technologies”, OECD Science, Technology and Industry Policy Papers, No. 165, OECD Publishing, Paris, https://doi.org/10.1787/0248ead5-en.
[20] OECD (2024), “How to best use STI policy experimentation to support transitions?”, OECD Policy Briefs, OECD Publishing, Paris, https://www.oecd.org/en/publications/how-to-best-use-sti-policy-experimentation-to-support-transitions_7b246309-en.html.
[3] OECD (2024), “OECD Agenda for Transformative Science, Technology and Innovation Policies”, OECD Science, Technology and Industry Policy Papers, No. 164, OECD Publishing, Paris, https://doi.org/10.1787/ba2aaf7b-en.
[17] OECD (2024), “OECD Agenda for Transformative Science, Technology and Innovation Policies”, OECD Science, Technology and Industry Policy Papers, No. 164, OECD Publishing, Paris, https://doi.org/10.1787/ba2aaf7b-en.
[56] OECD (2024), “Stakeholder engagement and collaboration in STI for the green transition”, OECD Policy Briefs, OECD Publishing, Paris, https://www.oecd.org/en/publications/stakeholder-engagement-and-collaboration-in-sti-for-the-green-transition_80de5e48-en.html.
[33] OECD (2023), “Regulatory sandboxes in artificial intelligence”, Digital Economy Papers, No. 356, OECD Publishing, Paris, https://www.oecd.org/en/publications/regulatory-sandboxes-in-artificial-intelligence_8f80a0e6-en.html.
[55] OECD (2017), Core Skills for Public Sector Innovation: A Beta Model of Skills to Promote and Enable Innovation in Public Sector Organisations, OECD, Paris, https://oecd-opsi.org/wp-content/uploads/2018/07/OECD_OPSI-core_skills_for_public_sector_innovation-201704.pdf.
[57] OECD (2017), Systems Approaches to Public Sector Challenges: Working with Change, OECD Publishing, Paris, https://doi.org/10.1787/9789264279865-en.
[29] OPSI (2018), “Experimentation Works (EW)”, web page, https://oecd-opsi.org/innovations/experimentation-works-ew.
[16] Paunov, C. and S. Planes-Satorra (2023), “Engaging citizens in innovation policy: Why, when and how?”, OECD Science, Technology and Industry Policy Papers, No. 149, OECD Publishing, Paris, https://doi.org/10.1787/ba068fa6-en.
[22] Paunov, C. and S. Planes-Satorra (2023), “Engaging citizens in innovation policy: Why, when and how?”, OECD Science, Technology and Industry Policy Papers, No. 149, OECD Publishing, Paris, https://doi.org/10.1787/ba068fa6-en.
[43] Portugal’s National Innovation Agency (2025), “Technological free zones”, web page, https://ani.pt/en/technological-free-zones.
[52] Potts, J. (2009), “The innovation deficit in public services: The curious problem of too much efficiency and not enough waste and failure”, Innovation, Vol. 11, pp. 34-43, https://doi.org/10.5172/impp.453.11.1.34.
[13] PRIMA Québec (2024), Advanced Materials: Quebec Innovation in Action, PRIMA Québec, https://www.prima.ca/wp-content/uploads/2024/06/PRIMA-24-109_Portrait_2024_EN_V2-1.pdf.
[35] Ranchordás, S. (2021), “Experimental lawmaking in the EU: Regulatory sandboxes”, University of Groningen Faculty of Law Research Paper Series, No. 12/2021, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3963810.
[54] Ritchie, F. (2014), “Resistance to change in government: Risk, inertia and incentives”, Economics Working Paper Series, No. 1412, University of the West of England, Bristol, https://d1wqtxts1xzle7.cloudfront.net/36186546/1412-libre.pdf?1420662072=&response-content-disposition=inline%3B+filename%3DResistance_to_change_in_government_risk.pdf&Expires=1740473172&Signature=RSHy~TDWuaC-~ZMlvxYQNyEHeIfkC1KfKRn356ILF~OP4Lb1Tvhinh4q70.
[45] Roberts, I. (ed.) (2016), Running Randomised Controlled Trials in Innovation, Entrepreneurship and Growth: An Introductory Guide, Innovation Growth Lab, https://media.nesta.org.uk/documents/a_guide_to_rcts_-_igl_09aKzWa.pdf.
[11] Robinson, D. and D. Doherty (2025), “Strategic intelligence tools for emerging technology governance: A policy primer”, OECD Science, Technology and Industry Working Papers, No. 2025/22, OECD Publishing, Paris, https://doi.org/10.1787/02c05775-en.
[1] Robinson, D. et al. (2021), “Policy lensing of future-oriented strategic intelligence: An experiment”, Technological Forecasting and Social Change, Vol. 169, p. 120803, https://doi.org/10.1016/j.techfore.2021.120803.
[2] Robinson, D., D. Winickoff and L. Kreiling (2023), “Technology assessment for emerging technology: Meeting new demands for strategic intelligence”, OECD Science, Technology and Industry Policy Papers, No. 146, OECD Publishing, Paris, https://doi.org/10.1787/e738fcdf-en.
[47] Serin, E. et al. (2022), Randomised Controlled Trials: Can They Inform the Development of Green Innovation Policies in the UK?, Grantham Research Institute on Climate Change and the Environment and Centre for Climate Change Economics and Policy, London, https://www.lse.ac.uk/granthaminstitute/wp-content/uploads/2022/10/Randomised-control-trials_Can-they-inform-the-development-of-green-innovation-policy-in-the-UK-1.pdf.
[53] Torugsa, N. and A. Arundel (2017), “Rethinking the effect of risk aversion on the benefits of service innovations in public administration agencies”, Research Policy, Vol. 46/5, pp. 900-910, https://doi.org/10.1016/j.respol.2017.03.009.
[5] Weber, M. et al. (2021), “Agilität in der F&I-Politik. Konzept, Definition, Operationalisierung” [Agility in R&I policy: Concept, definition, operationalisation], Studie zum deutschen Innovationssystem, Vol. Nr. 8-2021, https://www.e-fi.de/fileadmin/Assets/Studien/2021/StuDIS_08_2021.pdf.
[24] Wellstead, A. (2020), Policy Innovation Labs, Springer, Cham, https://doi.org/10.1007/978-3-319-31816-5_4000-1.