This study introduces the Collective Intelligence Model for Evaluation (CIME), a framework designed to address the limitations inherent in traditional assessment methods. CIME integrates psychometric modelling, large language models (LLMs), and human expertise to effectively assess complex and granular competencies, including both cognitive operations and non-cognitive skills, across diverse assessment contexts. By leveraging AI-driven analysis through a continually updated knowledge database, complemented by expert oversight, CIME enhances the precision and reliability of assessments. The model establishes rigorous and continuously updated scoring criteria to ensure internal consistency, eliminate bias, and maintain international comparability. In addition, CIME incorporates advanced diagnostic methodologies, utilising psychometric scaling and Retrieval-Augmented Generation (RAG) to provide comprehensive diagnostics. This ensures that all outputs are anchored in accurate and verified sources. The study also highlights CIME’s ability to deliver personalised feedback, tailoring diagnostics to individual learners’ competencies and backgrounds. This enables personalised learning pathways and supports informed decision-making in education. Furthermore, CIME offers a scalable and efficient solution for modern educational assessment and policymaking by continuously refining its knowledge base and securely integrating with external data sources.
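The abstract's claim that RAG-based diagnostics are "anchored in accurate and verified sources" can be illustrated with a minimal retrieval sketch: score-relevant criteria are retrieved from a knowledge base and their sources are attached to the generated feedback. All names here (`KNOWLEDGE_BASE`, `embed`, `retrieve`, `grounded_feedback`) and the toy bag-of-words similarity are hypothetical illustrations, not part of CIME itself.

```python
import math

# Toy "verified knowledge base": scoring-criterion snippets with sources.
# (Hypothetical data standing in for CIME's continually updated database.)
KNOWLEDGE_BASE = [
    {"text": "argument structure clarity", "source": "rubric-v3 sec 2.1"},
    {"text": "evidence use and citation", "source": "rubric-v3 sec 2.4"},
    {"text": "collaboration and empathy", "source": "non-cognitive framework sec 1"},
]

def embed(text: str) -> dict:
    """Toy bag-of-words embedding standing in for a real encoder."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list:
    """Return the k knowledge-base entries most similar to the query."""
    q = embed(query)
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda e: cosine(q, embed(e["text"])),
        reverse=True,
    )
    return ranked[:k]

def grounded_feedback(learner_response: str) -> str:
    """Attach the retrieved criterion's source so feedback stays verifiable."""
    hit = retrieve(learner_response, k=1)[0]
    return f"Feedback on '{hit['text']}' (anchored in {hit['source']})"
```

In this sketch the retrieval step is what enforces the anchoring property: the feedback string can only cite criteria that exist in the curated knowledge base, mirroring the abstract's point that outputs are traceable to verified sources rather than free-form model generations.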