| What are the issues?
Technological innovation is a major engine of human well-being and economic activity. However, technology also raises concerns for individuals and societies, as witnessed in previous waves of technological change in industry and in current debates around artificial intelligence, nuclear power, gene editing and social media. Reaping the benefits of emerging technologies while preventing or mitigating their potential negative effects is a critical challenge for Science, Technology and Innovation (STI) policy.
It is partly through governance that scientific, entrepreneurial and policy communities seek to manage the risks and benefits of technology. Here “governance” does not refer just to regulation, but to a multitude of institutional and normative mechanisms used to steer technology development.
The governance of emerging technologies, however, poses a well-known puzzle: the so-called Collingridge dilemma holds that early in the innovation process — when interventions and course corrections might still prove easy and cheap — the full consequences of the technology, and hence the need for change, might not be fully apparent (Collingridge 1980). Conversely, when the need for intervention becomes apparent, changing course may become expensive, difficult and time-consuming. Uncertainty and lock-ins are at the heart of many governance debates and continue to pose questions about “opening up” and “closing down” development trajectories.
Technology governance can be defined as the process of exercising political, economic and administrative authority in the development, diffusion and operation of technology in societies. It can consist of norms (e.g. regulations, standards and customs), but can also be operationalised through physical and virtual architectures that manage risks and benefits. Technology governance pertains to formal government activities, but also to the activities of firms, civil society organisations and communities of practice. In its broadest sense, it represents the sum of the many ways in which individuals and organisations shape technology and how, conversely, technology shapes social order.
Getting technology governance right is critical, but also a challenge
Many of the barriers to enabling emerging technologies lie not in technology per se, but in technology governance. For some, governance is too complex and onerous. For others, governance systems fail to protect key human values, leading to a crisis of public trust in technology. For still others, governance fails to align technology development with broader human goals. Under such conditions of uncertainty, traditional regulatory instruments – e.g. risk assessment, product-based standard-setting, export controls and liability – tend to focus narrowly on immediate or readily quantifiable consequences and their management, or come into play only after key decisions about technology design have been made. Yet many of the issues raised by emerging technologies are more fundamental and long-term. A case in point is artificial intelligence (AI), the impacts of which could be major, ubiquitous and uncertain. Another case is neurotechnology, where embedded devices and brain-computer interfaces are subject to existing safety and efficacy regimes, but these regimes may not address long-term ethical questions about protecting human agency and mental privacy.
Read about effective governance of AI and its particular challenges.
There is a persistent but misguided view that resistance to technology mostly stems from public ignorance about the benefits of particular technologies or of innovation in general. Social science research shows that more important reasons for such resistance might be basic value conflicts, distributive concerns and failures of trust in governing institutions such as regulatory authorities and bodies giving technical advice. In general, countries and innovators should take into account, to the greatest extent possible, social goals and concerns from the beginning of the development process.
| What’s the context?
While essential for addressing some of society’s most pressing challenges, innovation can also have negative consequences for individuals and societies, as witnessed in previous waves of industrial revolution or in current debates around digitisation, data privacy and artificial intelligence. Indeed, the profound and ambiguous societal implications of emerging technologies bring them to the forefront of popular media and political debate.
| What needs to be done?
Several emerging approaches in science policy seek to overcome the Collingridge dilemma referred to above by engaging concerns with technology governance “upstream”. Process governance shifts the locus from managing the risks of technological products to managing the innovation process itself: who, when, what and how. It aims to anticipate concerns early on, address them through open and inclusive processes, and steer the innovation trajectory in a desirable direction.
The key idea is to make the innovation process more anticipatory, inclusive and purposive (see figure), injecting public good considerations into innovation dynamics and ensuring that social goals, values and concerns are integrated as technologies develop.
Governance mechanisms – if designed well – can enable “responsible innovation”: a kind of innovation that is more productive, responsive, and socially robust. While it remains a challenge to realise this goal, best practices have emerged that can serve as a guide.
These include funding social science and humanities in an integrated fashion with natural and physical science, using participatory forms of foresight and technology assessment to chart desirable futures, and engaging stakeholders in communicative processes with clear links to policy. Some have called this an “anticipatory governance” approach.
Read about engaging closely with the private sector.
The idea of anticipatory governance is to provide an opportunity to work as productively and pragmatically as possible within the confines of the so-called Collingridge dilemma. To do so, it envisions building three capacities: anticipation, or foresight; integration across disciplines; and public engagement. Building these capacities, both in traditional innovation organisations (such as universities and private firms) and across society more broadly (in non-governmental organisations and public education), can help create a reflexive approach to innovation that constantly re-examines its public purpose and its ability to facilitate responsible changes in society.
Anticipatory governance recognises that at least two changes from current thinking are crucial. The first is that governance is not just something that happens in governing institutions like legislatures, courts and regulatory agencies, but that it also happens through the interaction of users with new technologies and through the creative choices that researchers make in laboratories. This “jurisdictional” change means that the bounds of expertise must be expanded from traditional modes, bringing experts in governance into conversation with lab researchers and bringing lay citizens into the conversation as well.
The second is that anticipation is not about predicting a future state of an innovation, but rather about asking questions about plausible futures so that we may act in the present to help bring about the kind of futures we decide we want. This “temporal” change means that people from many different backgrounds need to work together to imagine futures and begin to build pathways towards them in the present. Neither of these changes resolves the Collingridge dilemma, but together, they give us the best hope of living within it.
David Guston, Foundation Professor and founding director of the School for the Future of Innovation in Society, Arizona State University, USA.
| What are countries doing?
Several recent trends – some governmental and some market-driven – in the governance of emerging technologies are taking an anticipatory approach. Three approaches in particular for “upstream” innovation governance – participatory agenda-setting, co-creation and test beds, and value-based design and standardisation – show the greatest promise.
The EC-OECD STIP Compass contains information on more than one hundred policy initiatives that address the ethics of emerging technologies.
Find out more by clicking on the interactive chart.
| Did you know?
Many governments are building governance considerations into mainstream STI policy. Two prominent examples are the OECD Council Recommendation on Responsible Innovation in Neurotechnology and the OECD Council Recommendation on Artificial Intelligence. These instruments are emblematic of how OECD countries perceive the need both to innovate more and to innovate well, making the innovation process more goal-oriented, inclusive and anticipatory. Engaging governance within the innovation process has the potential to embed public good considerations into technologies. See an example of good practice here.
The OECD Recommendation on Responsible Innovation in Neurotechnology is the first international standard in this domain. It aims to guide governments and innovators to anticipate and address the ethical, legal and social challenges raised by novel neurotechnologies while promoting innovation in the field.
The OECD Principles on Artificial Intelligence promote AI that is innovative and trustworthy and that respects human rights and democratic values. They set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field.