

Provider-Directed Artificial Intelligence–Assisted Clinical Case Review for Enhanced Medical Education in the Emergency Department
Wednesday, May 20, 2026 3:15 PM to 4:50 PM · 1 hr. 35 min. (America/New_York)
L504 - L505: Level L
Innovations-SAEM
Informatics/Data Science/AI
Information
Intro/Background
Emergency providers (EPs) shape the initial trajectory of patient care but often do not learn about outcomes downstream of emergency department (ED) disposition, especially among patients with undifferentiated symptoms. This limits EP calibration and practice-based learning. Manual chart review and peer feedback are labor-intensive and typically limited to unusual clinical scenarios or catastrophic outcomes. Artificial intelligence (AI)–generated clinical summaries, including outcomes after ED disposition, have the potential to enhance provider learning and reduce patient harm from misdiagnosis.
Purpose/Objective
We developed a feedback pipeline that allows clinicians to easily flag cases while on shift in the ED for later follow-up on clinical course, enabling individualized reflection and learning. This process, based on self-directed learning theory, allows EPs to submit specific questions for follow-up and receive concise summaries via e-mail. We aimed to leverage AI, specifically large language models (LLMs), to automate the generation of customized, education-targeted patient summaries.
Methods
Clinicians request feedback by clicking a “Tell Me What Happens Next” button in the electronic health record, inputting case-specific queries (e.g., “Were the blood cultures positive?”) and selecting a follow-up interval (three days, one week, or two weeks). Summaries are generated by clinician reviewers. Concurrently, we piloted our institution’s HIPAA-compliant AI toolkit, specifically with GPT-5-mini, to assess AI summaries according to criteria including accuracy, completeness, conciseness, and helpfulness on 1–5 Likert scales.
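The request-capture step described above can be sketched in code; this is a minimal illustrative model, not the production EHR integration, and all names (`FeedbackRequest`, `build_prompt`, `INTERVALS`) are hypothetical:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical mapping of the three selectable follow-up intervals to days.
INTERVALS = {"three days": 3, "one week": 7, "two weeks": 14}

@dataclass
class FeedbackRequest:
    """One 'Tell Me What Happens Next' submission, as a sketch."""
    case_id: str
    query: Optional[str]   # optional free-text question
    interval: str          # "three days", "one week", or "two weeks"
    requested_on: date

    def due_date(self) -> date:
        """Date when the follow-up summary should be generated."""
        return self.requested_on + timedelta(days=INTERVALS[self.interval])

def build_prompt(req: FeedbackRequest) -> str:
    """Assemble an LLM prompt for a concise post-disposition summary."""
    prompt = (f"Summarize the clinical course of case {req.case_id} "
              f"after ED disposition. Be accurate, complete, and concise.")
    if req.query:
        prompt += f' Also answer the clinician question: "{req.query}"'
    return prompt

req = FeedbackRequest("ED-1234", "Were the blood cultures positive?",
                      "one week", date(2026, 5, 20))
```

A scheduler could then poll for requests whose `due_date()` has passed and route `build_prompt(req)` to the institutional AI toolkit for summary generation.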
Outcomes
Over 91 days, we received 314 feedback requests (an average of 3.45 per day) from 89 users: residents (37, 41.6%), attendings (27, 30.3%), and physician assistants (27, 30.3%). The plurality of summaries was requested at one week (123, 39.2%), with 85 (27.1%) including a free-text question. To date, 26 AI summaries have been assessed, demonstrating high accuracy (mean score 5.00), conciseness (4.20), completeness (4.93), and helpfulness (4.67).
Summary
Emergency physicians (EPs) often set the trajectory of care for acutely ill or injured patients but rarely find out what happens next or receive feedback on outcomes after patients leave the ED. External feedback through traditional peer review mechanisms is rare, time-intensive, and commonly rendered in response to unusual clinical scenarios or catastrophic outcomes. We developed a feedback pipeline that allows EPs to flag cases while on shift in the ED for later follow-up on downstream clinical outcomes. We aimed to enable and enhance physician reflection and learning on flagged cases by sending concise follow-up summaries by e-mail at desired time intervals. In addition to general summaries, EPs may submit case-specific queries (e.g., “Were the blood cultures positive?”). While follow-up summaries on individual cases were initially completed by expert clinician chart abstractors, artificial intelligence (AI), specifically large language models (LLMs), has shown promise in prior work as a tool for automating the generation of customized, accurate, concise, and complete clinical summaries in multiple healthcare contexts. As of this submission, we have piloted LLM clinical summary generation with 26 summaries using our institution’s HIPAA-compliant AI toolkit, with high accuracy in these initial benchmarks. In the future, we aim to further optimize the LLM prompt, demonstrate reliable accuracy and helpfulness, and automate this process of feedback, case reflection, and learning on downstream clinical events. If successful, we seek to scale our approach across EDs and other clinical contexts.
CME
1.5
Disclosures
Access the following link to view disclosures of session presenters, presenting authors, organizers, moderators, and planners: