From Data to Dialogue: Integrating Large Language Models Into Competency-Based Clinical Competency Committee Letters

Wednesday, May 20, 2026 3:15 PM to 4:50 PM · 1 hr. 35 min. (America/New_York)
L504 - L505: Level L
Innovations-SAEM
Education

Intro/Background
Clinical Competency Committee (CCC) letters are intended to synthesize longitudinal assessment data into meaningful, actionable feedback for learners. In practice, they are time-intensive, cognitively demanding, and variable in effectiveness, contributing to faculty burnout and inconsistent feedback quality. As assessment data grow in volume and complexity, programs need scalable, ethical approaches that preserve faculty judgment while improving clarity, consistency, and educational value.
Purpose/Objective
To describe a human-in-the-loop framework for integrating large language models (LLMs) into Clinical Competency Committee letter development. This innovation uses LLMs to synthesize multi-source assessment data and generate structured draft letters, while preserving faculty authorship, ethical oversight, and transparency. The objective is to improve the quality and consistency of feedback while reducing faculty cognitive load and burnout.
Methods
Existing CCC workflows were augmented with large-language-model–assisted synthesis of narrative evaluations, performance metrics, procedural competency, conference attendance, and related data. Structured prompts guided theme extraction and milestone alignment, producing draft letters for faculty review. Final content was determined exclusively by CCC members, ensuring human oversight and ethical integration.
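The prompt-assembly step described above can be sketched as follows. This is an illustrative assumption, not the program's actual prompt: the function name, field names, and prompt wording are hypothetical, and it assumes assessment data have already been de-identified and grouped by competency domain.

```python
# Hypothetical sketch of structured-prompt assembly for theme extraction
# and milestone alignment. Prompt wording and data fields are illustrative
# assumptions; inputs must be de-identified before this step.

def build_ccc_prompt(resident_id: str, data_by_domain: dict[str, list[str]]) -> str:
    """Assemble a structured prompt asking an LLM to extract recurring
    themes and align observations with ACGME milestone domains."""
    sections = []
    for domain, excerpts in data_by_domain.items():
        bullets = "\n".join(f"- {e}" for e in excerpts)
        sections.append(f"### {domain}\n{bullets}")
    body = "\n\n".join(sections)
    return (
        f"You are drafting a Clinical Competency Committee letter for "
        f"de-identified resident {resident_id}.\n"
        "For each competency domain below: (1) identify recurring themes, "
        "(2) highlight strengths and areas for growth, and "
        "(3) align observations with the relevant ACGME milestones.\n"
        "Produce a structured draft letter for faculty review; do not make "
        "summative judgments.\n\n"
        f"{body}"
    )

# Example with placeholder, de-identified data
prompt = build_ccc_prompt(
    "R-014",
    {
        "Patient Care": [
            "Efficient in high-acuity resuscitations",
            "Needs more repetitions on procedural sedation",
        ],
        "Professionalism": ["Consistently punctual; strong team rapport"],
    },
)
```

Keeping the prompt template as code (rather than free text pasted per resident) is one way to make drafts consistent across the committee, which is the consistency gain the Outcomes section reports.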
Outcomes
Implementation resulted in a substantial reduction in the time faculty needed to generate CCC letters. CCC members reported improved efficiency and decreased cognitive burden, while learners noted greater clarity, personalization, and perceived usefulness of feedback. Letters demonstrated increased consistency, specificity, and alignment with competency domains, supporting more meaningful learner reflection and development.
Summary
Clinical Competency Committee (CCC) letters are a cornerstone of competency-based medical education, synthesizing longitudinal assessment data into meaningful, actionable feedback for learners. In practice, however, CCC letter development is often time-intensive, cognitively demanding, and emotionally taxing for faculty, contributing to burnout and variability in feedback quality. As the volume and complexity of assessment data increase, programs need scalable approaches that improve efficiency without sacrificing educational integrity or human judgment.

This presentation describes a human-in-the-loop framework for integrating large language models (LLMs) into CCC letter development. Rather than replacing faculty authorship, LLMs are used as cognitive extenders to synthesize multi-source assessment data and generate structured draft letters that support, rather than supplant, educator expertise.

Inputs include narrative clinical evaluations, procedural competency logs, conference attendance metrics, and CCC discussion notes. Data are de-identified and organized by competency domain prior to LLM processing. Structured prompts guide the LLM to identify recurring themes, highlight strengths and areas for growth, and align observations with ACGME milestones. Draft letters are generated using a standardized structure designed to promote clarity, consistency, and learner-centered feedback.

Faculty reviewers maintain full editorial control, refining language, adding contextual nuance, and ensuring accuracy prior to finalization. No autonomous decisions are made by the LLM, and all outputs are subject to mandatory human review. Learners are informed through a standardized disclosure statement that AI was used to assist with synthesis, reinforcing transparency and ethical use.
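The mandatory-review and disclosure guardrails described above can be sketched as a minimal gate: a draft cannot be finalized until a faculty reviewer has signed off on edited text, and every finalized letter carries the AI-use disclosure. Class names, fields, and the disclosure wording are illustrative assumptions, not the program's actual implementation.

```python
# Minimal sketch of the human-in-the-loop gate: no finalization without
# faculty review, and a standardized disclosure on every finalized letter.
# All names and wording here are hypothetical.

from dataclasses import dataclass

DISCLOSURE = (
    "This letter was drafted with AI assistance for data synthesis; "
    "all content was reviewed and approved by CCC faculty."
)

@dataclass
class DraftLetter:
    resident_id: str
    body: str
    faculty_reviewed: bool = False

    def approve(self, reviewer: str, edited_body: str) -> None:
        """Faculty retain full editorial control: approval records the
        reviewer's edited text, not the raw LLM output."""
        self.body = edited_body
        self.faculty_reviewed = True

    def finalize(self) -> str:
        """Refuse to finalize any letter that skipped human review."""
        if not self.faculty_reviewed:
            raise RuntimeError("Mandatory human review not completed.")
        return f"{self.body}\n\n{DISCLOSURE}"

draft = DraftLetter("R-014", "LLM-generated draft text...")
draft.approve("Reviewer A", "Faculty-edited letter text.")
final = draft.finalize()
```

Encoding the review requirement as a hard failure, rather than a convention, is one way to make "no autonomous decisions by the LLM" enforceable in the workflow rather than merely stated policy.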
Early implementation of this framework led to a substantial reduction in the time faculty spent producing CCC letters, alleviating a significant source of cognitive and emotional burden for CCC members. Faculty reported improved workflow efficiency and greater satisfaction with the consistency and organization of the final product. Learners described the letters as more personalized, specific, and reflective of their clinical performance, supporting clearer understanding of expectations and more meaningful self-reflection.

This innovation addresses two critical challenges in graduate medical education: responsible integration of artificial intelligence and prevention of faculty burnout. By reframing LLMs as infrastructure for assessment synthesis rather than content generators, this model demonstrates how AI can enhance feedback quality while preserving the human voice central to medical education. The approach is scalable and adaptable across training programs, and it is grounded in ethical guardrails that prioritize transparency, oversight, and learner trust.

Attendees will leave with a practical framework for integrating LLMs into CCC workflows, including example prompts, workflow considerations, and strategies for maintaining faculty ownership and psychological safety. This session offers a replicable model for programs seeking to leverage AI to improve feedback quality, support educators, and promote sustainable educational practices in emergency medicine.
CME
1.5
