Using ChatGPT as a Simulated Patient to Enhance Communication Skills Training in Emergency Medicine

Tuesday, May 19, 2026 5:12 PM to 5:20 PM · 8 min. (America/New_York)
International B: Level I
Abstracts
Simulation

Information

Abstract Number
407
Background and Objectives
Communication skills are fundamental to high-quality emergency care, yet emergency medicine (EM) physicians have limited opportunities to practice counseling with structured feedback. Although standardized patient simulation can address this gap, it is often resource intensive and variable in delivery. Emerging large language models such as ChatGPT offer a cost-effective alternative that supports deliberate practice and standardized feedback. This study assessed the feasibility, perceived educational value, and realism of a ChatGPT-based simulated patient for communication skills training in academic emergency medicine.
Methods
We conducted a prospective cohort feasibility study evaluating a CustomGPT designed to simulate counseling-focused emergency department patient encounters. Three predefined scenarios were used: missed abortion managed expectantly, threatened abortion with viable intrauterine pregnancy, and non-pregnant acute abnormal uterine bleeding. The simulator used standardized instructions, fixed case prompts, and an automated end-of-case debrief aligned to a 9-item counseling milestone checklist. Practicing emergency physicians participated in the simulations and completed post-encounter surveys assessing usefulness, realism, educational value, and perceived adequacy of feedback compared with standardized patient actors.
Results
Twenty-four EM physicians participated, of whom 19 completed post-simulation surveys. Survey data were analyzed descriptively. Among respondents, 94.8% rated the chatbot as at least somewhat useful and 94.8% rated it as at least somewhat realistic. Nearly 90% perceived the simulation as educationally valuable, and 78.9% reported that the absence of a standardized patient actor did not diminish feedback quality.
Conclusion
Participants reported high perceived value of a large language model–based simulated patient for communication skills training. Most found the experience useful, realistic, and educationally valuable, and did not feel that the lack of a human standardized patient limited feedback quality. These findings suggest that ChatGPT-facilitated simulation may serve as a feasible and adaptable adjunct to traditional simulation methods for communication skills training in emergency medicine. Limitations include a small, single-site cohort and use of an internally developed survey tool.
CME
1.25
