After the introductory session, the second part of the event focused on measuring the impact of digital technologies in education. Participants highlighted that assessing this impact is a complex and ongoing process. As several authors note (Golden, 2020[6]; Preskill et al., 2013[7]), policy evaluation in education entails understanding the complex contexts and relationships within education systems, as well as the specific reforms being implemented. Part of this involves the use of indicator-based frameworks, which allow policymakers to assess the availability, use, and outcomes of, in this case, digital technologies in schools. These frameworks track metrics such as ICT infrastructure (e.g., device-to-student ratios and internet access), classroom usage patterns, and learning outcomes such as academic performance and digital skills.
For instance, international assessment frameworks such as PISA offer data and overviews of students’ ability to solve complex problems, think critically, and communicate effectively. The same applies to the ICILS (International Computer and Information Literacy Study) framework in relation to students’ computer and information literacy (CIL) and computational thinking (CT)1. However, while these indicators provide valuable insights into usage patterns and correlations, they do not establish causality or fully capture the nuanced effects of digital technologies on educational outcomes (see Box 1). Because of this, some authors (Golden, 2020[6]; Preskill et al., 2013[7]; Stufflebeam, 2001[8]) stress the need for more comprehensive evaluation strategies that integrate both quantitative and qualitative methods to fully understand the complexities of educational reforms.
Qualitative and mixed-methods assessments – such as interviews or focus groups with teachers, students, and parents – provide crucial context for understanding how digital technologies are perceived and used in practice. Surveys and classroom observations, for example, can complement self-assessment data by offering additional insights into how digital technologies are used and how their use correlates with student engagement and teacher practices (Joint Research Centre, 2023[9]). These methods allow policymakers to consider stakeholder needs, identify implementation barriers, and, ultimately, find ways to address them. While these data sources help policymakers track trends and correlations, assessing actual impact and causal mechanisms requires additional research approaches, such as experimental designs (e.g., randomised controlled trials) and longitudinal studies.
Randomised controlled trials (RCTs) allow policymakers to establish causal links between technology use and educational improvements by comparing outcomes between treatment and control groups (Cukurova and Luckin, 2019[10]). As such, these studies offer robust evidence to inform scalable policy decisions and identify effective practices across diverse educational contexts (Golden, 2020[6]). Longitudinal studies allow policymakers to track the long-term effects of digital technologies on students’ learning trajectories, career readiness, and overall educational equity, and to assess the sustainability of reforms and their impact over time (Golden, 2020[6]).
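To illustrate the basic logic of such comparisons, the average effect of a technology intervention in an RCT can be sketched as the difference in mean outcomes between the two groups (an illustrative simplification; the notation below is not drawn from the sources cited):

$\hat{\tau} = \bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}}$

where $\bar{Y}$ denotes the average learning outcome (for example, a test score) in each group. Because students are randomly assigned, this difference can, in expectation, be attributed to the intervention rather than to pre-existing differences between groups.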