While some of the competition concerns raised by AI will resemble traditional theories of harm and will probably be solvable with the existing competition toolbox, the rise of AI systems introduces novel challenges for competition enforcement, particularly in attributing conduct and establishing intent. Unlike traditional forms of co-ordination or exclusion, algorithmic and AI systems may produce anti-competitive outcomes autonomously, without direct human instruction or explicit agreement. Autonomous optimisation challenges traditional enforcement paradigms, requiring a shift in how responsibility is attributed.
AI systems can learn profit-maximising or exclusionary strategies through reinforcement learning or repeated interaction, even in the absence of deliberate co-ordination. This undermines conventional legal notions of intent and agreement. As the Autorité de la concurrence and Bundeskartellamt joint report on algorithms (2019[104]) noted, proving intent may be less relevant when competitive harm arises regardless of human intention. The report acknowledges the difficulty of attributing responsibility in cases where algorithms operate as opaque “black boxes,” noting that “the complexity and opacity of algorithms may make it difficult to determine whether a certain conduct is anticompetitive and whether it can be attributed to a specific undertaking” (p. 24).
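The mechanism described above can be made concrete with a stylised simulation. The sketch below (all parameters, the price grid and the market model are illustrative, not drawn from the report) shows two independent Q-learning pricing agents, each rewarded only for its own profit and never communicating, repeatedly setting prices; depending on the run, such agents can settle on prices above the competitive level without any instruction to co-ordinate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative price grid: index 0 is the competitive price,
# index 2 a monopoly-like price (hypothetical values).
PRICES = np.array([1.0, 1.5, 2.0])
N_ACTIONS = len(PRICES)

def profits(i, j):
    """Per-period profit for each agent given price indices i, j:
    the cheaper seller captures all demand; ties split the market."""
    pi, pj = PRICES[i], PRICES[j]
    if pi < pj:
        return pi, 0.0
    if pj < pi:
        return 0.0, pj
    return pi / 2, pj / 2

# Two independent Q-learners; each agent's state is the rival's
# previous price index, so no information is shared beyond prices.
Q = [np.zeros((N_ACTIONS, N_ACTIONS)) for _ in range(2)]
alpha, gamma = 0.1, 0.9
state = [0, 0]

for t in range(20000):
    eps = max(0.01, 1.0 - t / 10000)  # decaying exploration
    acts = []
    for k in range(2):
        if rng.random() < eps:
            acts.append(int(rng.integers(N_ACTIONS)))
        else:
            acts.append(int(np.argmax(Q[k][state[k]])))
    rewards = profits(acts[0], acts[1])
    next_state = [acts[1], acts[0]]  # each agent observes the rival's price
    for k in range(2):
        s, a, ns = state[k], acts[k], next_state[k]
        Q[k][s, a] += alpha * (rewards[k] + gamma * Q[k][ns].max() - Q[k][s, a])
    state = next_state

# Greedy play after learning: neither agent was told to co-ordinate,
# yet learned prices may sit above the competitive level.
final = [int(np.argmax(Q[k][state[k]])) for k in range(2)]
print("learned prices:", PRICES[final[0]], PRICES[final[1]])
```

Whether the agents converge to supra-competitive prices depends on the parameters, the state representation and the random seed; the point of the sketch is only that nothing in the code expresses an agreement, which is precisely what strains intent-based legal tests.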
Compounding these challenges is the issue of explainability and accountability. Agentic AI further complicates the ‘black box’ problem by continually updating parameters and adapting behaviour (Belcak et al., 2025[116]; Desai and Riedl, 2025[89]). This may have procedural consequences: a lack of explainability can impede effective enforcement. Some authors suggest that such opacity may justify ex-post audit rights or mandatory disclosure of documentation spanning all elements of the AI system, including, but not limited to, the model, its training data and its algorithms, rather than new substantive prohibitions (Hagiu and Wright, 2025[65]). Another possibility would be technical co-operation between authorities and developers to facilitate interpretability (OECD, 2025[103]). Opacity does not create new offences, but it can hamper the attribution of accountability. Enforced transparency and auditability may therefore become essential to sustaining competition law enforcement in AI-driven markets (Hagiu and Wright, 2025[65]).
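To illustrate what disclosure spanning the elements of an AI system might cover, the sketch below defines a hypothetical documentation record; the field names and example values are the author's illustration of the idea, not a schema proposed in the literature cited above.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical disclosure record covering the elements discussed above:
# the model, its training data and its optimisation logic.
@dataclass
class AISystemAuditRecord:
    system_name: str
    model_description: str              # architecture, version, objective
    training_data_sources: list[str]    # provenance of training data
    optimisation_objective: str         # what the system maximises
    oversight_mechanisms: list[str] = field(default_factory=list)

# Illustrative entry for a pricing agent (all values hypothetical).
record = AISystemAuditRecord(
    system_name="dynamic-pricing-agent",
    model_description="Q-learning policy over a discrete price grid",
    training_data_sources=["simulated market interactions"],
    optimisation_objective="per-period profit",
    oversight_mechanisms=["price-floor guardrail", "quarterly audit"],
)
print(asdict(record)["system_name"])
```

A structured record of this kind is one way an ex-post audit right could be operationalised: an authority would know in advance which artefacts to request, rather than negotiating disclosure case by case.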
The OECD’s AI Principles underscore that AI systems should be designed so that (i) their logic, data and decision-making can be suitably explained (explainability) and (ii) affected parties have meaningful access to challenge, appeal or correct outcomes (contestability). The first ensures transparency; the second ensures that transparency is actionable. Without contestability, explanations risk being purely descriptive and not leading to meaningful oversight, accountability or market contestation (OECD.AI, 2025[117]).
From a competition-policy perspective, ensuring contestability means enabling suppliers, users and third-party challengers to verify, question or override embedded algorithmic decisions that may raise entry-barrier, lock-in or exclusion risks. Explainability is thus a necessary but not sufficient condition for contestability.
Hagiu and Wright (2025[65]) argue that competition authorities must adapt their frameworks to account for autonomous optimisation. The OECD has similarly recommended evaluating foreseeability and controllability – whether a firm could reasonably predict or prevent its algorithm’s exclusionary or collusive behaviour (OECD, 2023[90]; 2025[103]). Authorities may need to assess design choices, governance processes and oversight mechanisms, rather than relying solely on evidence of explicit intent. The CMA’s 2021 report on algorithmic harms discusses how ineffective oversight of algorithmic systems can lead to consumer and competition harm. It highlights the need for regulators to develop audit tools and accountability frameworks, implying that firms could be held responsible for negligent deployment or for failing to put safeguards in place (CMA, 2021[118]).
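One concrete form such a safeguard could take is a guardrail that constrains an autonomous pricing policy and logs every intervention. The sketch below is a minimal illustration of this idea (the function name, bounds and logging format are hypothetical); a record of such overrides is the kind of controllability evidence a firm might point to.

```python
# Hypothetical safeguard: clamp an agent's proposed price to a
# permitted band and log overrides, creating an audit trail.
def guarded_price(proposed: float, floor: float = 1.0,
                  ceiling: float = 2.0) -> float:
    """Return the price actually charged after applying the guardrail."""
    price = min(max(proposed, floor), ceiling)
    if price != proposed:
        # In a real deployment this would go to an audit log.
        print(f"override: {proposed} -> {price}")
    return price

print(guarded_price(3.0))  # prints "override: 3.0 -> 2.0", then 2.0
print(guarded_price(1.5))  # within bounds: no override, prints 1.5
```

A static price band is of course a crude control – it bounds exploitation but does not by itself prevent tacit co-ordination within the band – which is why the report's emphasis is on governance processes and oversight as a whole rather than any single technical fix.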
The difficulties are likely to be further exacerbated by the rise of agentic AI, where autonomous agents may act on behalf of firms or individuals, or even independently, including in ways that resemble market co-ordination. This complicates the attribution of harmful outcomes, such as consumer welfare losses or exclusionary conduct.
Agentic AI systems can raise complex liability questions, particularly when they act with a degree of autonomy that blurs the line between tool and actor. As these systems increasingly operate on behalf of firms or individuals – or even independently – the challenge lies in determining who is accountable for their outputs. Scholars highlight the need for governance mechanisms to mitigate the risks of information asymmetry and discretion that arise when such systems act as agents (Kolt, 2024[119]; Desai and Riedl, 2025[89]). When agentic systems generate exclusionary or exploitative outcomes, the lack of direct human control complicates attribution, especially under legal frameworks that rely on intent or direct causality. This calls for a reassessment of liability standards, potentially shifting towards responsibility for design choices, oversight and risk mitigation rather than traditional notions of intent.
Policy foresight exercises underline the potential systemic impact of agentic AI, but also its current limitations. The OECD’s AI Capability Indicators report notes, “Agentic systems typically perform below level 3, indicating significant limitations on AI’s ability to self-monitor and adaptively regulate its own reasoning” (OECD, 2025[120]). Nonetheless, the growing autonomy of agents could begin to extend their influence beyond narrow applications, reshaping online ecosystems and creating new intermediation layers between firms and consumers (Toner et al., 2024[121]). This suggests that, while agentic AI may foster contestability in specific markets, or even replace human agency in intermediation functions (such as travel agents), it also raises forward-looking concerns around accountability, transparency, and the governance of interactions among autonomous systems, which are still not fully understood.