The magic of pilot projects in formative testing


At Swiss Connect Academy, we value the learnings that result from pilot projects. They offer insights into unforeseen opportunities, contribute to new standards and deepen working relationships. The AI Development team collaborated with Taskbase on two pilot projects between 2021 and 2022. Below we invite you to read about their objectives and key takeaways – or, as we like to call them, experiences gained!

Pilot Project #1: Provide students with questions to train self-awareness

We embarked on providing participants with 75 new questions for the first module of our Leadership course: Self-Awareness – a module that enables students to critically question themselves, recognize how their attitudes and values influence their managerial behaviours and decisions, and identify areas for individual professional development.

We achieved the objectives of our first pilot by:

1. Assigning 3 question difficulty levels to all questions

This means that 25% of the module's questions were written at Bloom's Taxonomy "Remember" level (Level 1); 35% at the "Understand" level (Level 2); and 45% require the learners to "Apply" what they have learned (Level 3). This allowed us to ensure an engaging range of high-quality questions that also focused on students' application of learned skills. Additionally, each question's difficulty level determines its point value, ranging from 10 points for Level 1 questions to 30 points for Level 3 questions.
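For illustration, the mapping from difficulty level to points could be sketched as below. The level names and the 10- and 30-point values come from the text; the Level 2 value is our assumption (the text only gives the range), and all identifiers are hypothetical:

```python
# Hypothetical sketch: Bloom's Taxonomy levels mapped to points.
# The text states 10 points for Level 1 and 30 for Level 3;
# the Level 2 value of 20 is an assumed midpoint.
BLOOM_LEVELS = {
    1: {"name": "Remember", "points": 10},
    2: {"name": "Understand", "points": 20},  # assumption
    3: {"name": "Apply", "points": 30},
}

def points_for(level: int) -> int:
    """Return the points awarded for a question at the given difficulty level."""
    return BLOOM_LEVELS[level]["points"]
```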

2. Varying 4 question types, with sample solutions, throughout the module

To promote learners' multi-logical thinking skills, we diversified the question types among:

  1. Multiple choice: single solution or multiple solutions
  2. Cloze: drop down solutions or keyword answers
  3. Matching: assign a term to a definition or order a process
  4. Open: keyword or full-sentence answer

Lastly, after completing a question, participants could review a sample solution to support their understanding of the learning objective.

3. Encouraging participants to repeat all e-tests and the summative evaluation

We created a safe space for participants to review and practice new skills as opposed to “testing” their knowledge.

Pilot Project #2: Provide immediate formative feedback for all question types

Once the first group of students successfully completed the Self-Awareness module, we were eager to set up the feedback for the second group.

We reached the objectives of our second pilot by:

1. Creating feedback based on hundreds of responses

The feedback fell into 3 categories:

  1. correct: confirmed the students’ answers fully met the learning objective
  2. incorrect: identified a misunderstanding in participants’ answers and pointed them to the specific part of the learning content where the answer is located
  3. semi-correct: clarified that a concept was correctly mentioned but another key element of the students’ answer was missing

The learning value for students is especially relevant in Cloze and Open questions where the AI system identified appropriate feedback based on students’ freely written text answers.
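The three categories can be pictured with a minimal sketch like the following. The category names come from the text; the concept-matching rule and every identifier are illustrative assumptions, not the actual AI system's logic:

```python
# Illustrative sketch of the three feedback categories described above.
# A hypothetical rule: compare the concepts a student mentioned
# against the concepts the learning objective requires.

def categorize(required_concepts: set, mentioned_concepts: set) -> str:
    """Assumed rule: 'correct' if every required concept is covered,
    'semi-correct' if only some are, 'incorrect' if none are."""
    covered = required_concepts & mentioned_concepts
    if covered == required_concepts:
        return "correct"
    if covered:
        return "semi-correct"
    return "incorrect"
```

In a freely written Cloze or Open answer, the "mentioned concepts" would have to be extracted from the student's text first – the step the AI system handles.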

2. Linking 3 feedback types to Self-Awareness learning objectives to calculate mastery

By linking all feedback to learning objectives, the algorithm was able to calculate students’ mastery levels per learning objective. As participants progress through the module, they practice different competencies. In the summative evaluation, the algorithm challenges learners based on their mastery levels per competency to determine whether they have reached a satisfactory mastery of each. The learning value for students includes:

• ensuring all learning objectives are covered

• receiving the right feedback based on responses

• posing questions that challenge participants’ abilities to the right degree, without overwhelming them with questions too difficult for their present mastery level
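One way to picture the mastery calculation is the sketch below. It assumes a scoring convention (correct = 1.0, semi-correct = 0.5, incorrect = 0.0) and a practice threshold that the text does not specify; all names are hypothetical, and the real algorithm is Taskbase's:

```python
from collections import defaultdict

# Assumed scores for the three feedback categories.
SCORE = {"correct": 1.0, "semi-correct": 0.5, "incorrect": 0.0}

def mastery_per_objective(responses):
    """responses: list of (learning_objective, feedback_category) pairs.
    Illustrative rule: mastery is the average score of all responses
    linked to that objective."""
    scores = defaultdict(list)
    for objective, category in responses:
        scores[objective].append(SCORE[category])
    return {obj: sum(vals) / len(vals) for obj, vals in scores.items()}

def objectives_to_practice(mastery, threshold=0.8):
    """Hypothetical selection rule: keep challenging the learner
    on objectives still below the threshold."""
    return [obj for obj, level in mastery.items() if level < threshold]
```

Under this sketch, a student with one correct and one semi-correct answer on an objective would sit at 0.75 mastery and keep receiving questions on it until crossing the threshold.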

Experiences gained from both pilots

Undoubtedly, developing an AI system that accurately responds to participants’ answers requires a considerable amount of initial human interaction. In addition, we gained valuable experience in the following:

1. choosing the best question type per question content

2. writing engaging questions based on 3 question difficulty levels throughout the module

3. writing 3 types of constructive and meaningful feedback and sample solutions

4. basing feedback writing on anticipated and real learners’ answers

5. linking each question and feedback to a competency

6. testing and adjusting the AI system with existing and anticipated students’ responses to provide a pleasant learning experience

The learnings from our pilot projects strongly influenced the evidence-based processes we use to write new questions and feedback. Most importantly, we are well underway to ensuring all modules at Swiss Connect Academy have high-quality questions with AI feedback capabilities for all question types.