

Artificial Intelligence vs Doctors in Managing Intake and Triage (ADMIT)
Wednesday, May 20, 2026 5:04 PM to 5:12 PM · 8 min. (America/New_York)
International Hall 9: Level I
Operations/Quality Improvement/Administration
Abstract Number
99
Background and Objectives
Timely assignment of an inpatient admitting service is a core Emergency Department (ED) operational function affecting patient flow, boarding, and interdepartmental coordination. In our academic ED with multiple admitting services, complex admission criteria frequently lead to disagreement and require escalation to an administrative adjudicator physician, termed the Capacity MD (CapMD). Variability in how attending ED physicians (AEDPs) apply admission guidelines represents an important source of operational inefficiency. Artificial intelligence (AI)-based decision support may offer a scalable approach to optimizing admission decisions.
Methods
We conducted a cross-sectional, vignette-based study in a tertiary care academic ED with 120,000 annual visits. Using an institutional guideline document, a convenience sample of AEDPs selected an admitting service for 10 standardized clinical vignettes representing ambiguous admissions. CapMD determinations served as the gold standard. Two AI models, SAFE AI and NotebookLM, were provided the same guidelines and vignettes and asked to select an admitting service. The primary outcome was AEDP and AI accuracy vs CapMD decisions. Secondary outcomes included time to decision. Agreement among AEDPs was measured using Fleiss’ κ. Because AI models were evaluated as single entities, accuracy was evaluated using item-level exact inference comparing each model’s correctness on individual cases to the AEDP distribution.
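For readers unfamiliar with the agreement statistic, Fleiss’ κ can be computed directly from a table of per-vignette category counts. The sketch below is illustrative only; the toy ratings are invented for demonstration and are not the study’s data.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a table of category counts.

    counts[i][j] = number of raters who assigned item i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(counts)        # number of items (vignettes)
    n = sum(counts[0])     # raters per item
    k = len(counts[0])     # number of categories (admitting services)

    # Per-item observed agreement P_i, then the mean across items
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N

    # Chance agreement P_e from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Illustrative toy data (not the study's ratings): 3 vignettes,
# 4 raters, 2 candidate admitting services.
toy = [[4, 0], [2, 2], [3, 1]]
print(round(fleiss_kappa(toy), 3))  # prints -0.037
```

Values near 0, as in the study’s κ = 0.09, indicate agreement little better than chance.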
Results
Eleven AEDPs provided responses, with a median of 12 years since residency graduation (IQR 6.5–16) and 8 years as local faculty (IQR 6.5–9.6); 72% identified as male. Across 110 case-level responses, AEDPs selected the CapMD-designated admitting service with a median accuracy of 50.0% (IQR 42.9–57.1%) and low inter-rater agreement (Fleiss’ κ = 0.09). SAFE AI matched CapMD determinations in 60% of cases and NotebookLM in 80%. AEDPs required a median of 38 seconds (IQR 30–65) per vignette, compared with 139.7 seconds for SAFE AI and 1.9 seconds for NotebookLM.
Conclusion
AEDPs showed substantial variability in selecting the correct admitting service despite standardized criteria. AI models applying the same admission rules demonstrated accuracy comparable to, and in some cases exceeding, that of AEDPs, suggesting that AI-assisted decision support may improve ED outflow in complex academic settings.
CME
1.25
