The governance of AI is complicated by the number of institutions and policy themes it affects
Compared with most other technologies, AI requires that more trade-offs be managed in resource allocation and goal setting, that more constituencies be satisfied, and that more potential instances of policy incoherence be foreseen and averted.
AI is considered a strategic technology, one in which major powers are actively aiming to attain competitive advantage
A technology race could lead the countries most influential in developing AI to sacrifice governance aims in pursuit of leadership positions. The changes brought by AI could dwarf those of previous technological revolutions, ranging from positive outcomes such as higher labour productivity and living standards, better health and education services, and accelerated science, to profound disruption of labour markets.
AI is constantly evolving, and the timing and direction of its evolution are largely unpredictable
New developments in AI technologies could directly affect the priorities for, or the need for, public policy. For instance, over the past two years a new generation of AI-enabled malware has emerged. One example, DeepLocker, can hide in benign carrier applications such as video-conferencing software, evade detection by most antivirus and anti-malware tools, and target individual computer users through facial recognition, geolocation, voice recognition, and other parameters (Stoecklin, 2018).
The public’s relationship to AI differs from its relationship to any other technology
Experiments show that subjects unconsciously anthropomorphise AI and robots (Fussell et al., 2008), creating fertile ground for AI-related anxieties and hyperbole.
There is no consensus on how to operationalise some of the main governance objectives set out in the recent proliferation of public and private AI recommendations and principles, of which there are now several dozen. For example: