
Technology governance

Effective governance of AI and its particular challenges


The governance of AI is complicated by the number of institutions and policy themes it affects

Compared to most other technologies, more trade-offs have to be managed in terms of resource allocation and goal setting, more constituencies have to be satisfied, and more potential instances of policy incoherence have to be foreseen and averted.


AI is considered a strategic technology, one in which major powers are actively aiming to attain competitive advantage

A technology race could lead the countries most influential in developing AI to sacrifice governance aims in pursuit of leadership positions. The changes brought by AI could dwarf those of previous technological revolutions: among the positive examples, raising labour productivity and living standards, improving services in health and education, and accelerating science; among the negative, profoundly disrupting labour markets.


AI is constantly evolving, with a timing and direction that are largely unpredictable

New developments in AI technologies could directly affect the priorities for, or the need for, public policy. For instance, over the past two years a new generation of AI-enabled malware has emerged. One example of this – DeepLocker – can hide in benign carrier applications, such as video conferencing software, evade detection by most antivirus and anti-malware software, and target individual computer users through facial recognition, geolocation, voice recognition and other parameters (Stoecklin, 2018).


The public’s relationship to AI differs from that of any other technology

Experiments show that subjects unconsciously anthropomorphise AI and robots (Fussell et al., 2008), opening ground for AI-related anxieties and hyperbole.


There is no consensus on how to operationalise some of the main governance objectives set out in the recent proliferation of public and private AI recommendations and principles (of which there are now several dozen). For example:

  • Achieving ‘ethical AI’ is an almost universal statement of intent. However, a principles-based AI ethical framework involves practical uncertainties. For instance, unlike the medical profession (the model of a principles-guided profession that has inspired AI ethics recommendations), AI has no comparable professional body of practitioners guided by common goals and fiduciary duties.
  • Governance recommendations often aim to guide AI research in helpful or ‘human-centred’ directions. However, this is difficult, if not impossible, to enforce, at least for corporate R&D. Moreover, any new AI technology can be put to harmful and/or helpful uses. For instance, generative adversarial networks (GANs), invented in 2014, can help hackers to guess a user’s anti-malware software, but they can also create helpful data for medical science (among other uses).
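The adversarial mechanism that makes GANs dual-use can be illustrated with a deliberately simplified toy: a "generator" adjusts itself until a "discriminator" can no longer tell its output from real data. The sketch below is a hypothetical illustration, not how real GANs are trained; hill-climbing on a single parameter stands in for gradient descent on neural networks, and a fixed interval test stands in for a learned discriminator.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

# Real data come from Normal(5, 1). The fixed "discriminator" labels a
# sample "real" if it lies within two standard deviations of the real
# mean. A one-parameter "generator" produces Normal(mu, 2) samples.

def real_sample() -> float:
    return random.gauss(5.0, 1.0)

def discriminator_accuracy(gen_mu: float, n: int = 500) -> float:
    correct = 0
    for _ in range(n):
        correct += 3.0 <= real_sample() <= 7.0                  # real labelled real
        correct += not 3.0 <= random.gauss(gen_mu, 2.0) <= 7.0  # fake labelled fake
    return correct / (2 * n)

# The generator keeps any parameter change that pushes the
# discriminator's accuracy down toward chance (0.5), i.e. that makes
# its fakes harder to distinguish from real samples.
mu = 0.0
for _ in range(200):
    candidate = mu + random.uniform(-0.5, 0.5)
    if discriminator_accuracy(candidate) < discriminator_accuracy(mu):
        mu = candidate
# mu drifts from 0 toward the real mean of 5
```

The same pressure that drives the generator to match real data is what makes GANs useful for synthesising realistic medical training data and, equally, for producing convincing fakes, which is the governance dilemma noted above.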

