
SAGE


Spoken Assessments Guided by Enhanced technologies

A breakthrough for spoken e-assessment

Digital technologies have transformed the way we assess knowledge and skills, enabling everything from online exams and job screenings to adaptive learning platforms. Such e-assessments offer clear benefits, including flexible timing and location, streamlined processes, and tailored feedback.

Yet, assessments of spoken language remain a blind spot. Most platforms still rely heavily on written formats, while oral evaluations are conducted manually by human examiners. This approach is slow, expensive, and prone to bias, often resulting in delayed or inconsistent feedback. Moreover, as generative AI raises new concerns about the reliability of written assessments, the demand for scalable, trustworthy spoken evaluations is more urgent than ever.

The SAGE project meets this challenge head-on. It introduces AI-powered solutions for spoken assessment in Dutch, French, and English, ensuring efficiency, fairness, and trustworthiness across educational and professional contexts.

Technological challenges for SAGE

Speech varies widely depending on factors like accent, dialect, emotional state, stress, and speech disorders. This variability makes it difficult for current technologies to produce reliable, consistent results at scale. For instance, speech recognition systems often struggle with specialized vocabulary or different pronunciations, which can lead to errors. Similarly, tools designed to assess written language cannot be directly applied to spoken language, because spoken and written forms differ significantly in structure and delivery.

The SAGE project addresses these technological challenges across multiple aspects of speech assessment:

  • Speech analysis: SAGE strengthens the ability to identify individual speakers, measure fluency, and detect stress in oral responses. It aims to make assessments more inclusive by reducing bias against non-native accents and atypical speech patterns.
  • Speech transcription: SAGE strikes a balance between leveraging powerful existing models and ensuring accurate, verbatim transcriptions, especially when handling diverse speech inputs.
  • Language analysis: SAGE develops automated rubrics to deliver both meaningful insights for examiners and targeted feedback for students on individual assessments, highlighting elements such as question difficulty, summary of key answer components, language delivery (fluency and competency), and content accuracy.
  • Recommendation: SAGE integrates explainable machine learning technology – meta-learning and active learning approaches – with speech and language technologies to accelerate the evaluation process while maintaining consistency in assessment outcomes.
  • Validity and fairness: Ensuring that these technologies are valid and fair is central to the mission of SAGE. The project provides evidence-based support to human assessors, reinforcing trust and transparency in the assessment process.

Demonstrating results

The project results will be integrated into a demonstrator designed for three key use cases:

  • Evaluations in high-stakes settings, namely dialogue-based oral exams in the biomedical domain (in English), where an examiner questions a student.
  • Summative spoken tests in the domain of STEM (in Dutch), where a learner delivers a monologue on a topic.
  • Formative evaluation in apps for spoken language practice (in French), where a learner interacts with a chatbot.

The SAGE project addresses the growing need for spoken e-assessments in education and in professional training across sectors such as healthcare, manufacturing, and utilities. By advancing AI-driven assessment technologies and validating them in real-world applications, SAGE strengthens both the scalability and the credibility of spoken assessments. Its results will support educators, language trainers, and assessors, while creating significant market opportunities for the participating companies.

“Ultimately, SAGE will make speaking assessments fairer, more efficient, and more transparent, using AI technologies that empower the human assessor with explainable insights.”

SAGE

SAGE develops and evaluates accurate, trustworthy, and scalable AI technologies for grading spoken assessments in English, Dutch, and French, offering valuable support to human assessors.

SAGE is an imec.icon research project funded by imec and Agentschap Innoveren & Ondernemen (VLAIO).

The project started on 01.10.2025 and is set to run until 30.09.2027.

Project information

Industry

  • BLCC
  • Linguineo
  • Sensotec
  • Televic Education

Research

  • EdTech Station
  • imec – IDLab Data Science Lab – UGent
  • imec – ITEC – KULeuven

Contact

  • Project lead: Pieter Pangat, Televic Education
  • Research lead: Anaïs Tack, imec – ITEC – KULeuven
  • Proposal manager: Frederik Cornillie, imec – ITEC – KULeuven
  • Innovation manager: Eric Van der Hulst, imec