As outlined in the section on taking a horizontal view, a clear pattern has emerged: policymaking on immersive technologies has thus far placed a strong emphasis on fostering innovation, developing the necessary infrastructure and amplifying the potential benefits of these technologies. Policies often hinge on strengthening research ecosystems, supporting SMEs and promoting adoption in specific sectors.
While these policies can accelerate further technological development, they do little to address the distinct and sometimes heightened risks and complexities that immersive technologies introduce to societies (OECD, 2024[8]). These potential risks can stem from large volumes of data collection, potential psychological and physical impacts, and complications around cross-border data governance, among other issues (OECD, 2025[1]). Where policymaking specifically targets immersive technologies, policy principles have focused substantially on questions of ethics, safety and digital security, and even within that small set of instruments the principles have addressed virtual worlds exclusively. Otherwise, existing policies on digital technologies more broadly cover immersive technologies within their scope, but potential gaps require further assessment.
Given the many distinctive and even unique characteristics of immersive technologies, policymakers may wish to consider measures aimed at mitigating potential risks. For example, VR headsets can collect granular biometric data, including facial expressions, eye movements and other non-verbal cues. AR and MR devices have the potential to record bystanders and process data about them without their explicit consent. Such capabilities could create tension with existing data protection frameworks and raise questions about user privacy, digital security and potential misuse, particularly in sensitive areas such as healthcare or children’s engagement with virtual environments.
Moreover, the highly immersive nature of these technologies can amplify traditional online risks. Challenges around manipulated and misleading content, hate speech or manipulative advertising could become more acute in environments where users may not perceive digital content as digital, where avatars can obscure identities and where AI-driven elements can shape user experiences in non-transparent ways. The risks of motion sickness or psychological effects, such as addiction or social isolation, also underline the need for more robust research, guidelines and technical safeguards (OECD, 2024[8]).