Comparative Evaluation of Ambient Artificial Intelligence Scribe Tools in Emergency Care: A Prospective Crossover Study of DAX and Abridge

Wednesday, May 20, 2026 4:48 PM to 4:56 PM · 8 min. (America/New_York)
International Hall 9: Level I
Abstracts
Informatics/Data Science/AI

Information

Abstract Number
663
Background and Objectives
Documentation burden in the emergency department contributes to clinician burnout. Ambient artificial intelligence scribe tools have emerged to reduce documentation effort by passively generating clinical notes from patient encounters. Comparative data evaluating these tools in emergency department workflows are limited. We conducted a prospective crossover study to compare DAX and Abridge ambient tools with respect to perceived work burden, usability, documentation quality, satisfaction, and overall preference.
Methods
We conducted a single-site, prospective crossover study in an academic emergency department from April to June 2025. Twenty emergency medicine faculty physicians were enrolled; 18 completed both study phases after exclusion of two participants who did not use either tool. Participants used DAX and Abridge in alternating three-week periods. Surveys assessed perceived work burden, usability, documentation quality, and satisfaction. Adoption was defined as the proportion of authored notes containing ambient artificial intelligence output. Paired Wilcoxon signed-rank tests compared survey responses between tools. Linear mixed-effects models adjusted for order of tool exposure, adoption rate, and baseline characteristics.
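The paired comparison described above can be sketched as follows. This is a hypothetical illustration, not the study's analysis code: the ratings are invented, and variable names are assumptions. Each of the 18 physicians contributes one rating per tool, and a Wilcoxon signed-rank test compares the paired ratings.

```python
# Hypothetical sketch of the paired within-physician comparison.
# Ratings below are invented for illustration only (lower = less burden).
from scipy.stats import wilcoxon

dax_burden     = [1, 2, 1, 2, 1, 2, 1, 1, 2, 3, 1, 2, 1, 2, 2, 1, 2, 1]
abridge_burden = [2, 2, 3, 2, 2, 3, 2, 1, 3, 3, 2, 2, 2, 3, 2, 2, 3, 2]

# Paired test: ties (zero differences) are dropped by default.
stat, p = wilcoxon(dax_burden, abridge_burden)
print(f"W = {stat}, p = {p:.4f}")
```

Because the test operates on within-physician differences, it matches the crossover design, where each participant serves as their own control.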
Results
Both DAX and Abridge demonstrated high adoption and usability. DAX was associated with greater perceived reduction in overall work burden compared with Abridge (median 1.5 vs 2; p = 0.025). Usability scores were high and comparable, with identical median System Usability Scale scores (73.5 vs 73.5; p = 0.94). Documentation quality scores measured using a modified Physician Documentation Quality Instrument favored DAX (median 39 vs 36.5; p = 0.011). Satisfaction ratings were higher for DAX in unadjusted analyses (median likelihood to recommend 9 vs 7.5; p = 0.015); however, adjusted models demonstrated that these differences were primarily attributable to order effects rather than inherent differences between the tools.
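The order-effect adjustment reported above can be illustrated with a linear mixed-effects model: a random intercept per physician absorbs between-physician variability, while fixed effects separate the tool contrast from the order of exposure. This is a minimal sketch on simulated data, assuming a simple two-period design; the variable names and effect sizes are invented, not the study's.

```python
# Hypothetical sketch of a mixed-effects model adjusting for order of
# tool exposure. All data are simulated; names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subj in range(18):  # 18 physicians completed both phases
    order = "dax_first" if subj % 2 == 0 else "abridge_first"
    for tool in ("Abridge", "DAX"):
        # Invented generative model: small tool effect plus an order effect.
        score = (7.0
                 + (0.5 if tool == "DAX" else 0.0)
                 + (0.8 if order == "dax_first" else 0.0)
                 + rng.normal(0, 1))
        rows.append({"subject": subj, "tool": tool,
                     "order": order, "score": score})
df = pd.DataFrame(rows)

# Random intercept per physician; fixed effects for tool and order.
model = smf.mixedlm("score ~ tool + order", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```

If an apparent tool difference shrinks once the `order` term is included, the difference is attributable to sequencing rather than the tools themselves, which is the pattern the adjusted analyses reported here.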
Conclusion
In this crossover study, both DAX and Abridge were highly usable, reduced perceived documentation burden, and preserved documentation quality in the emergency department. These findings support the feasibility and perceived value of ambient artificial intelligence scribes in emergency care and underscore the need for larger, longer-duration, multi-site studies incorporating objective outcomes.
CME
1.25
