One of the most significant challenges for governance in the field of AI comes from the fact that critical AI capabilities in the private sector – concentrated in a small number of tech firms – far exceed those in the public sector. This asymmetry is likely to be one of the most important issues in the long-term development of socially beneficial AI systems. Major imbalances between public and private resources exist, particularly in talent and investment. For example, AI talent is scarce globally, and the flow of leading AI researchers from universities and public research bodies to private companies is a central feature of current developments. An obvious reason is the gap between private and public sector remuneration: DeepMind, a subsidiary of Alphabet, is reported to have a payroll of around £568,000 per employee (Warrington, 2019).
Some of the implications of the divide between public and private capabilities are just coming into view. Others will arise as AI systems evolve. Two key issues, about which more needs to be known, are:
First, and most importantly, the design and implementation of good policy could be impaired if governments know too little – or too little relative to firms – about AI technologies: how they are developing, their behavioural characteristics (such as safety), and how they might be applied. Currently, according to a recent account based on hundreds of interviews with senior executives of the largest corporate investors in AI in the United States, there is no ongoing strategic dialogue between these firms and the public sector, outside of communication around procurement contracts (Webb, 2019). Some of the AI research performed in the largest tech companies does enter the public realm, especially because of joint business-academic affiliations. But much else does not, for instance around systems integration and cryptography.
An instructive example of the consequences of such an expertise asymmetry comes from the financial crash of 2008. Prior to the crash, regulatory bodies generally understood much less than financial institutions about the complex financial products which enabled the crash, and so were ill-prepared to forewarn of the dangers or forestall the gathering crisis. Some policymakers already doubt the ability of regulators to keep abreast of key developments in the private sector concerning existing governance issues related to privacy and data (Lohr, 2019). AI could add to the complexity of addressing such challenges – across numerous fields of policy.
Second, will governments experience growing dependence on tech companies? One of many possible forms of dependence concerns public procurement. Governments are notoriously poor at procuring large and costly IT systems. Compared with the mishaps in private sector procurement, problems in public procurement of such systems have somewhat different causes, but typically stem from insufficient understanding of the technology (Krigsman, 2008).
Evidently, the governance of AI bears on fundamental economic, social, political and ethical outcomes. But designing and effecting good governance will not be straightforward.