Inadequate or skewed data in AI systems
Overreliance on AI
Lack of transparency and explainability
Resistance: Lack of public understanding about how government uses AI
Misuse or questionable use of AI, resulting in surveillance and privacy concerns
Exacerbating social exclusion and digital divides
If AI systems rely upon inadequate or skewed data, this could lead to inaccurate or adverse outcomes for some individuals or groups. With regard to civic participation and open government, it could result in inaccurate or imprecise consideration of citizen input in AI-enabled participation processes.
Moreover, deliberative processes are meant not only to harness collective intelligence and enable societal dialogue on specific policy issues, but also to enhance the knowledge of participants as well as their mutual understanding and empathy (OECD, 2021[175]). Overreliance on AI tools to improve the efficiency of deliberative processes might overlook other fundamental aspects of deliberation.
AI’s lack of transparency and explainability, especially in complex systems, can decrease trust in AI tools, and when these are used in democratic spaces, it can in turn affect trust in the participatory process and its outcomes. To address this challenge, some governments have launched algorithm registers that disclose detailed and technical information about algorithms, such as their purpose, design, data inputs, decision-making processes and potential biases (for details and examples, see Chapter 4, section on “Establishing guardrails to guide strategic and responsible AI”).
In addition, there are still many unknowns about user perceptions of AI in participatory processes and the influence it may have on their contributions and their willingness to participate. Experiments are being conducted to assess how participants react to the introduction of AI tools as intermediaries in participatory and deliberative processes (Hadfi et al., 2021[176]; Kim et al., 2021[177]). The use of AI tools to support participants in formulating inputs and contributions should be handled carefully, to ensure that a focus on efficiency and agreement does not compromise participants’ creativity in language and reflection.
Some governments have misused digital technologies for surveillance or even to silence populations and opposition, thus damaging the civic space (OECD, 2022[152]). The use of AI systems for surveillance purposes, content censorship and improper forms of predictive policing threatens the free exercise of civic freedoms, such as the right to peaceful assembly and freedom of expression. These improper uses risk creating general mistrust among citizens towards the adoption and deployment of AI systems in government functions and public services. The OECD (2022[152]) found that most national AI strategies fail to discuss the impact of AI on the ability to freely exercise rights, although around half propose concrete oversight and redress mechanisms.
Finally, many languages are insufficiently represented in AI systems, which are trained mainly on English, Spanish and Mandarin (Peixoto, Canuto and Jordan, 2024[178]) (see Chapter 1, section on “Exacerbating digital divides” for a detailed discussion of the issue). In the context of citizen participation, this means that inputs submitted in other languages might not be processed and valued in the same way, creating new democratic imbalances (Romberg and Escher, 2024[179]). For example, to address the language divide and to preserve the Icelandic language, the government of Iceland partnered with OpenAI to train the LLM GPT-4 in Icelandic (Government of Iceland, 2023[180]). Similarly, the University of Turku (Finland) partnered in 2023 with the company SiloAI to build Poro, a family of multilingual open-source LLMs covering all official European languages (University of Turku, 2023[181]). Similar efforts exist for other languages, including Indigenous and endangered ones (OECD, 2023[182]).