AWS Partner Network (APN) Blog

Transforming Teacher Development with AI-Driven Coaching Feedback

By Mohammed Reda, Solutions Architect – AWS
By Adrian Hanna, Account Manager UK EdTech – AWS
By Simon de Timary, Head of Data & AI – BJSS
By Lauren Fridlington, Design & Futures Thinking Consultant – SPARCK

BJSS

Artificial Intelligence (AI) is making significant advancements across various fields, and one of its most promising applications is in education. AI-driven coaching feedback offers a potential solution to the ongoing challenges in providing high-quality professional development for teachers, particularly in the area of coaching. According to a study by the Institute of Education Sciences, instructional coaching led to measurable improvements in teacher practices and student outcomes. Teachers who received five cycles of coaching showed significant gains in their classroom management scores, increasing from 4.6 to 6.3 points on a 7-point scale.

Despite these benefits, schools still face several hurdles in implementing effective coaching programs:

  • Time constraints that limit comprehensive professional development for coaches.
  • Lack of personalized support, guidance, or feedback for coaches.
  • Challenges for schools and Multi-Academy Trusts (MATs) in providing tailored support within existing structures.

Steplab is a customizable professional development platform for schools. The platform provides step-by-step content that teachers work through alongside their assigned, in-person coach to develop and improve their teaching practice. To address the challenges above, Steplab collaborated with AWS and BJSS, an AWS Partner, to develop an AI-powered coaching tool that gives coaches personalized feedback, enhancing the quality of conversations between coach and teacher and, in turn, teaching and learning outcomes. In this post, we discuss the development of this exploratory solution and the outcomes achieved.

Background

Following an exploratory workshop to evaluate potential generative AI use cases, Steplab, BJSS, and AWS collaborated on the development of a pilot solution. The goal was to test the feasibility and desirability of an AI coaching tool that could provide individualized feedback for coaches on their coaching practice. Figure 1 below offers a high-level overview of the pilot, illustrating how AI is used to analyze coaching session transcripts and extract feedback aligned with Steplab’s coaching model, identifying areas of praise and opportunities for improvement in order to make coaching conversations more focused and productive.

Figure 1. Solution High-Level Overview: a mentoring session between coach and teacher is transcribed and analyzed by AI, which returns one area of praise and one area for improvement to the coach, enhancing the quality of the coaching conversation.

Pilot Solution Development

Phase 1: Model Selection

The concept was to explore whether transcripts from coaching sessions between teachers and their coaches could be analyzed by an AI model. The goal was to automatically assign coaching steps from Steplab’s coaching model to individual sentences within the transcript, as shown in Figure 2.

Figure 2. Assigning coaching steps to single sentences from a coaching transcript: a sentence such as “I wonder, how do you think introducing that methodology contributed to their development?” is analyzed by the AI and assigned a coaching step ID.

BJSS fine-tuned a Bidirectional Encoder Representations from Transformers (BERT)-based model using SetFit, an efficient, prompt-free framework for few-shot fine-tuning of Sentence Transformers, to recognize and classify the different steps within a coaching conversation. However, the model struggled to generalize effectively on unseen data, delivering lower performance than anticipated.
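For illustration, a minimal SetFit fine-tuning sketch in the spirit of this first approach is shown below. The checkpoint, example sentences, and step labels are placeholders, not Steplab’s data or coaching taxonomy.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical labelled examples: transcript sentences mapped to coaching-step IDs.
# The sentences and labels below are invented for illustration.
train_ds = Dataset.from_dict({
    "text": [
        "That routine at the start of the lesson was really crisp.",
        "Let's agree the precise action step you will practise this week.",
        "How do you think the pupils responded to the new seating plan?",
    ],
    "label": [1, 7, 2],
})

# Start from a small Sentence Transformers checkpoint; SetFit fine-tunes the
# embedding model with contrastive pairs, then trains a lightweight classifier head.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_ds,
)
trainer.train()

# Classify an unseen transcript sentence into a coaching step.
print(model.predict(["I wonder how you could make that instruction more concise?"]))
```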

Due to these limitations, BJSS turned to Amazon Bedrock, utilizing Anthropic’s Claude 3.5 Sonnet large language model (LLM). Claude 3.5 Sonnet outperformed the initial BERT-based approach. The Claude 3.5 Sonnet announcement page highlights the model’s strong performance across various evaluation tasks compared with other leading models, making it a strong fit for the coaching transcript classification challenge.
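As a rough illustration of classifying a single transcript sentence with Claude 3.5 Sonnet on Amazon Bedrock, the sketch below uses the Bedrock Converse API. The region, prompt wording, and yes/no framing are assumptions for illustration, not Steplab’s production prompt.

```python
import boto3

# Bedrock runtime client; the region is an assumption for this sketch.
bedrock = boto3.client("bedrock-runtime", region_name="eu-west-2")

sentence = ("I wonder, how do you think introducing that methodology "
            "contributed to their development?")

# Ask the model whether the sentence demonstrates a given coaching step.
response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{
        "role": "user",
        "content": [{
            "text": "Does the following sentence from a coaching session show the coach "
                    "asking an open, reflective question? Answer YES or NO.\n\n" + sentence
        }],
    }],
    inferenceConfig={"maxTokens": 10, "temperature": 0},
)

print(response["output"]["message"]["content"][0]["text"])
```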

Phase 2: Designing a Specialized Analysis System

BJSS designed a parallel analysis system for each of Steplab’s coaching steps to address the challenges in processing multiple steps within a single prompt. To avoid the “lost in the middle” problem—where the LLM struggles to focus on multiple points when key information is placed in the middle of a long prompt—the team opted to process one step per prompt. This ensured the model had enough context to recognize each coaching step within the transcript accurately.

BJSS used prompts containing specific instructions and examples (few-shot prompting) to identify coaching steps, allowing the model to generalize well from just a few examples. As shown in Figure 3, the system evaluates each sentence from the transcript against specific coaching steps, identifying which step it covers. It also integrates data from previous coaching sessions, allowing the AI to track progress by comparing feedback against prior sessions.

Figure 3. Solution Parallel Analysis System: after preprocessing, 12 parallel models (one per coaching step) feed a step-selection process that also draws on data from previous coaching sessions; the selected steps drive a positive-feedback model and a negative-feedback model, and the results are stored for future use.

Phase 3: Iterative Feedback Generation

BJSS developed prompts to generate feedback based on the identified coaching steps. By continuously refining the prompts, BJSS ensured that the model better understood the nuances of each coaching step, which led to more accurate and relevant feedback.

Each iteration allowed the team to tweak the prompt structure, length, and phrasing, which helped prevent common issues such as model hallucination. After multiple iterations, the prompts were refined to ensure clear and precise feedback generation.
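As a rough sketch of this stage, the function below generates one area of praise from a coaching step detected in the transcript. The prompt wording and inference settings are assumptions that would need the kind of iterative refinement described above; they are not Steplab’s refined prompts.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def generate_praise(step_name, supporting_quotes):
    """Generate one concise area of praise grounded in transcript quotes."""
    prompt = (
        "You are giving feedback to an instructional coach on their coaching practice.\n"
        f"The coach demonstrated this coaching step well: {step_name}\n"
        "Supporting quotes from the session transcript:\n"
        + "\n".join(f"- {q}" for q in supporting_quotes)
        + "\n\nWrite one short, specific area of praise for the coach. "
          "Quote the transcript where helpful and do not add details that the "
          "quotes do not support."
    )
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 300, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]
```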

Phase 4: Implementation of the Final Solution

The final solution used a parallel analysis system built on Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock to locate where each coaching step had been covered. These results were prioritized according to Steplab’s coaching model, providing coaches with the one area of praise and one area for improvement judged most valuable for enhancing the quality of the coaching conversation. Coaches also received links to Steplab’s platform resources for further learning.
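The post does not detail how this prioritization works, but a minimal sketch of the idea, assuming each coaching step carries a priority rank (the ordering below is invented for illustration), might look like this:

```python
# Illustrative priority order over the 12 coaching-step IDs; Steplab's model
# defines the real ordering.
STEP_PRIORITY = [4, 2, 7, 1, 9, 3, 5, 6, 8, 10, 11, 12]

def select_feedback_targets(detected):
    """Pick one step to praise and one to improve, by priority.

    `detected` maps coaching-step IDs to the transcript sentences where the
    step was observed (an empty list means the step did not appear).
    """
    covered = [s for s in STEP_PRIORITY if detected.get(s)]
    missing = [s for s in STEP_PRIORITY if not detected.get(s)]
    praise_step = covered[0] if covered else None
    improvement_step = missing[0] if missing else None
    return praise_step, improvement_step

# Example: steps 2 and 7 were observed, so step 2 (higher priority) is praised
# and step 4 (the highest-priority missing step) becomes the area for improvement.
print(select_feedback_targets({2: ["How do you think that went?"],
                               7: ["Let's agree an action step."]}))
```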

The pilot solution successfully demonstrated that a parallel analysis system using an LLM could extract feedback from coaching session transcripts and provide coaches with constructive feedback, as shown in Figure 4 below.

Figure 4. Example Coaching Feedback: a screenshot from the Proof of Concept (POC) system showing one area of praise and one area for improvement, generated from the analysis of a coaching session.

An early-stage exploration was conducted with 5 schools and 7 former teachers, now Steplab employees, to better understand the context, problem space, and initial reactions to the proposed solution. Feedback from these sessions indicated that the ability to provide specific, actionable feedback—both areas of praise and areas for improvement—was highly valued. This revealed that such a solution would likely be well received by schools, MATs, teachers, and coaches alike. Additionally, the pilot solution was tested using transcripts from one hundred schools, further validating its potential effectiveness in real-world educational settings.

Solution Technical Details

Figure 5 illustrates the solution architecture, showing the flow from transcript upload through analysis to feedback retrieval using AWS services.

Figure 5. Solution Architecture

The solution follows these steps:

Upload and Analysis Process

  1. After authenticating through the login page backed by Amazon Cognito, the Coach initiates the process by obtaining a pre-signed URL via AWS AppSync and the getPresignedURL AWS Lambda function.
  2. Using the pre-signed URL, the Coach securely uploads a transcript to the designated Amazon Simple Storage Service (Amazon S3) Transcripts Bucket.
  3. The upload triggers an event, sending a message to Amazon Simple Queue Service (Amazon SQS).
  4. The transcriptFeedbackProcessor Lambda function is invoked.
  5. The transcriptFeedbackProcessor function calls Amazon Bedrock to analyze the transcript content.
  6. Once analyzed, the coaching feedback is stored in Amazon DynamoDB (a minimal sketch of steps 4-6 follows this list).
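To make steps 4-6 concrete, the sketch below shows what the transcriptFeedbackProcessor Lambda function might look like. The table name, the coach-ID prefix on the S3 object key, and the analyze_transcript and select_feedback_targets helpers (standing in for the analysis and prioritization sketched earlier) are assumptions, not the production implementation.

```python
import json

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("CoachingFeedback")  # assumed table name

def handler(event, context):
    # One SQS record per S3 upload event (steps 3-4).
    for record in event["Records"]:
        s3_event = json.loads(record["body"])["Records"][0]["s3"]
        bucket = s3_event["bucket"]["name"]
        key = s3_event["object"]["key"]

        # Assumes transcripts are uploaded under a "<coachId>/<file>" key prefix.
        coach_id, _, _ = key.partition("/")

        transcript = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # Step 5: analyze the transcript with Amazon Bedrock (see the earlier sketches).
        detected = analyze_transcript(transcript)
        praise_step, improvement_step = select_feedback_targets(detected)

        # Step 6: store the coaching feedback in DynamoDB.
        table.put_item(Item={
            "coachId": coach_id,
            "transcriptId": key,
            "praiseStep": praise_step,
            "improvementStep": improvement_step,
        })
```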

Retrieval Process

  1. Transcript List Retrieval:
    • 7a. Coaches request to retrieve a list of their stored transcripts through AWS AppSync.
    • 7b. The system queries DynamoDB to fetch the transcript list and returns it to the Coach.
  2. Specific Transcript Feedback Retrieval:
    • 8a. Coaches request results for a specific transcript.
    • 8b. The system fetches the specific coaching feedback from DynamoDB and presents it to the Coach (see the sketch after this list).
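A minimal sketch of the DynamoDB queries behind steps 7 and 8 is shown below, assuming the feedback table is keyed by coachId (partition key) and transcriptId (sort key); the actual key design is not described in this post.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("CoachingFeedback")  # assumed table name

def list_transcripts(coach_id):
    """Step 7: return the list of transcripts stored for a coach."""
    result = table.query(KeyConditionExpression=Key("coachId").eq(coach_id))
    return [item["transcriptId"] for item in result["Items"]]

def get_feedback(coach_id, transcript_id):
    """Step 8: return the stored coaching feedback for one transcript."""
    result = table.get_item(Key={"coachId": coach_id, "transcriptId": transcript_id})
    return result.get("Item")
```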

Future Plans

Steplab plans to expand testing to at least 50 additional schools, with the goal of full-scale implementation by 2025. Longer term, Steplab aims to extend the AI-powered coaching tool to provide direct support for Early Career Teachers (ECTs), complementing the coaching they receive from human mentors, and to offer a teacher rehearsal mode. This mode would let ECTs practice their teaching skills by interacting with an AI that simulates a student, rehearsing lessons and classroom management strategies.

“Steplab recognises the complexity of teaching and the need for feedback for teachers to be delivered by an expert coach. We believe the nuance inherent in teaching requires human to human feedback. This is why we have looked to harness the power of AI to develop expert coaches, rather than using AI to give feedback directly to teachers. This scalable model for training expert coaches has the potential to provide 1000s of teachers with high quality feedback by improving the quality of coaching conversations.”

– Claire Hill, Executive Director – Steplab

Conclusion

Steplab’s innovative use of AI-powered feedback helps coaches improve their coaching conversations. The solution, developed in collaboration with AWS and BJSS, demonstrates the potential to streamline teacher professional development. By using generative AI services on AWS, Steplab is addressing critical challenges in education, providing personalized, scalable support for both coaches and ECTs.

Looking to explore AI use cases for your organization? BJSS’s expertise in developing and implementing AI solutions on AWS can help turn your ideas into reality. Connect with BJSS today to start your AI journey and unlock new possibilities for your organization.



BJSS – AWS Partner Spotlight

BJSS is an AWS Advanced Tier Services Partner and a leading delivery-focused IT and business consultancy providing a wide range of services spanning the full software delivery lifecycle.

Contact BJSS | Partner Overview | AWS Marketplace