AI Item Set Generator

The ability to create customised, high-quality exam items aligned with educational objectives and writer-provided content. Based on Bloom's Taxonomy levels, this capability enables organisations to try, test and utilise assistive item-writing technology in a safe and familiar environment.

  • Use cases include: Test Preparation content, Mock content, Research, Special Interest, or integration within your existing item-writing process to assist your teams (human-in-the-loop recommended).
  • Status: Available in risr/assess version 8.11 and above.
  • Ideas for the future: additional item types, and creating Quiz content directly from the /advance ePortfolio module based on learning outcomes or areas of competence.
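As a rough illustration of how an item-generation request aligned to a Bloom's Taxonomy level might be structured, here is a minimal sketch. All names (`ItemRequest`, `build_prompt`) are hypothetical and do not reflect the risr/assess API; the prompt text is a placeholder for whatever the assistive writing model actually receives.

```python
# Hypothetical sketch only: not the risr/assess API.
from dataclasses import dataclass

BLOOM_LEVELS = (
    "Remember", "Understand", "Apply", "Analyse", "Evaluate", "Create",
)

@dataclass
class ItemRequest:
    objective: str          # educational objective the item must assess
    source_content: str     # writer-provided reference content
    bloom_level: str        # target Bloom's Taxonomy level
    item_type: str = "single-best-answer"

def build_prompt(req: ItemRequest) -> str:
    """Assemble a generation prompt for an assistive item-writing model."""
    if req.bloom_level not in BLOOM_LEVELS:
        raise ValueError(f"Unknown Bloom's level: {req.bloom_level}")
    return (
        f"Write one {req.item_type} exam item at the "
        f"'{req.bloom_level}' level of Bloom's Taxonomy.\n"
        f"Objective: {req.objective}\n"
        f"Base the item only on this content:\n{req.source_content}"
    )
```

Constraining generation to writer-provided content and an explicit Bloom's level is what keeps the output aligned with the educational objective rather than the model's general knowledge.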

AI Item Blueprinting & Tagging

Employing AI to automate the creation of assessment blueprints and tagging of items. Automating the blueprint creation will assist in mapping the scope of the test, including content areas, learning objectives, question formats, and the distribution of difficulty levels across the assessment.
Complementing the automated blueprinting, our AI tagging feature enables each test item/question to be automatically tagged with metadata that links it to specific elements of the blueprint, such as content domain, learning objective, difficulty level, cognitive level, and question format. This AI tagging process vastly improves efficiency and ensures that every question is purposefully selected and placed, contributing to the overall validity and reliability of the assessment. It also enables detailed analysis of test items and overall test structure, making it easier to refine and improve assessments over time.

  • Status: Conceptual design in progress.
  • Use cases include: Test Preparation content, Mock content, Research, Special Interest, or integration within your existing item-writing process to assist your teams (human-in-the-loop recommended).
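The metadata described above can be pictured as a small record per item. This is an illustrative sketch, not the product's schema: `ItemTags` simply mirrors the blueprint elements named in the text, and `coverage_by_domain` shows the kind of coverage check that tagged items make possible.

```python
# Illustrative sketch of per-item blueprint tags; not the product's schema.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ItemTags:
    item_id: str
    content_domain: str
    learning_objective: str
    difficulty: str       # e.g. "easy" / "medium" / "hard"
    cognitive_level: str  # e.g. a Bloom's Taxonomy level
    question_format: str  # e.g. "SBA", "SAQ"

def coverage_by_domain(tags: list[ItemTags]) -> Counter:
    """Count tagged items per content domain, the kind of
    blueprint-coverage analysis that tagging enables."""
    return Counter(t.content_domain for t in tags)
```

Once every item carries tags like these, checking that a draft test matches the blueprint's intended distribution reduces to simple aggregation.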

AI Simulated/Standardised Patients

Transforming Oral/Viva/Skills/clinical assessments by leveraging AI to create realistic and scalable standardised patient simulations. Our AI Simulated Patient technology simulates patient interactions for clinical assessments based on the case notes and actor scripts provided.
This innovative approach not only dramatically reduces the logistical complexity and expense associated with traditional actor-based simulations, but also introduces an unprecedented level of flexibility and scalability to Oral/Viva/clinical skills training, particularly in Test Preparation and research scenarios. It works on a speech-in / speech-out basis to simulate a typical interaction.

  • Status: Conceptual Demo is available for initial feedback
  • Use cases include: Test Preparation for Oral/Viva/Skills/Clinical, Research, Special Interest and future live assessment settings.
  • Ideas for the future: fully interactive video-based AI avatars. This next phase of the AI Standardised Patient technology promises to further enhance the realism and immersion of Oral/Viva/Skills/clinical assessments, offering additional training possibilities focused on improving patient interactions, diagnosis, history taking, etc.
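The speech-in / speech-out loop described above can be sketched as three stages: transcribe the candidate's speech, generate a patient reply grounded in the case notes, and synthesise audio. In this sketch `transcribe`, `respond_as_patient` and `synthesise` are stand-ins for real speech-to-text, language-model and text-to-speech services; none of these names come from the product.

```python
# Minimal sketch of a speech-in / speech-out turn.
# All three stage functions are placeholders, not real service APIs.

CASE_NOTES = "58-year-old with two hours of central chest pain."

def transcribe(audio: bytes) -> str:
    # Placeholder: a real system would call a speech-to-text service.
    return audio.decode("utf-8")

def respond_as_patient(question: str, case_notes: str) -> str:
    # Placeholder: a real system would prompt a language model with
    # the case notes and actor script to stay in character.
    if "pain" in question.lower():
        return "It started about two hours ago, right in the middle of my chest."
    return "Could you ask me that another way, doctor?"

def synthesise(text: str) -> bytes:
    # Placeholder for a text-to-speech service.
    return text.encode("utf-8")

def simulated_patient_turn(candidate_audio: bytes) -> bytes:
    """One turn of the interaction: speech in, speech out."""
    question = transcribe(candidate_audio)
    reply = respond_as_patient(question, CASE_NOTES)
    return synthesise(reply)
```

Keeping the reply generation conditioned on the case notes and actor script is what makes the simulated patient consistent across candidates, which is the point of standardisation.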

AI Marker/Grader module

Leveraging AI to mark short answer questions (SAQs) and long-form questions, bringing efficiency and accuracy (linked to blueprinting, tagging and standard setting) to the process.
The aim is to improve the speed and precision of marking at scale, significantly reducing the time educators spend and providing a mechanism for human-in-the-loop QA rather than the direct overhead of marking.
The module can be customised to accommodate various assessment types and subjects, ensuring equitable evaluation standards. By referencing relevant source materials for contextual understanding, it can ensure that marks are awarded with a deep understanding of the subject matter.

  • Status: Conceptual Demo is available for initial feedback
  • Use cases include: Test Preparation for immediate results/scoring/grading of written assessments, Research against human marking, Special Interest and future live assessment settings. Automated marking for Test Preparation or live assisted Oral/Viva/Skills/clinical assessments, based on either recordings transcribed using AI or real-time capture.
  • Ideas for the future: Extension of marking assistance capability for different question/assessment types
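A human-in-the-loop QA mechanism of the kind described above can be sketched as a scoring step plus an escalation rule: confident scores are awarded automatically, borderline ones are routed to a human marker. This is a hedged illustration only; `score_against_rubric` is a crude keyword-overlap stand-in for a real AI scoring model, and `mark_saq` and its `review_band` parameter are hypothetical names, not the module's API.

```python
# Hedged sketch of rubric-based SAQ marking with human-in-the-loop escalation.
# score_against_rubric is a keyword-overlap stand-in for a real scoring model.

def score_against_rubric(answer: str, rubric_points: list[str]) -> float:
    """Fraction of rubric points mentioned in the answer (crude proxy)."""
    hits = sum(1 for point in rubric_points if point.lower() in answer.lower())
    return hits / len(rubric_points)

def mark_saq(answer: str, rubric_points: list[str], max_marks: int,
             review_band: tuple[float, float] = (0.4, 0.6)) -> tuple[int, bool]:
    """Return (marks, needs_human_review). Borderline scores are routed
    to a human marker rather than auto-awarded."""
    score = score_against_rubric(answer, rubric_points)
    needs_review = review_band[0] <= score <= review_band[1]
    return round(score * max_marks), needs_review
```

The escalation band is the design point: rather than trusting the model everywhere, human effort is concentrated on exactly the answers where automated marking is least certain.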