

Generative Artificial Intelligence and Large Language Models: How They Work and the Case Against Their Use in Academic Medicine
Thursday, May 21, 2026 8:00 AM to 9:00 AM · 1 hr. (America/New_York)
M301: Level M
IGNITE! - SAEM
Informatics/Data Science/AI
Summary
Generative AI is a field of computer science gaining rapid adoption in many contexts, including a push for its inclusion in medicine. Large language models (LLMs) are a subset of generative AI that process massive datasets of written text to build statistical models of how language is strung together, all in an effort to output something that sounds like natural human speech, and is hopefully correct.
To start this IGNITE talk, I will cover the basic concepts of how these neural nets (the statistical models) are built and used. Concepts will include how words become vectors a computer can understand, reward functions and the notion of local and global maxima, and an overview of the schema that underlies a neural net. These topics can all be distilled into a brief talk, yet they are crucial for understanding the limitations of LLMs when applied to academic writing. (As far as credentials to discuss this complex and controversial topic: I hold a bachelor's degree in computer science with an emphasis in dataset handling and algorithm development geared toward the biological sciences.)
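The two ideas above, words becoming vectors and generation as statistical association, can be sketched in a few lines. This is a deliberately minimal toy (one-hot vectors and a bigram counter), not how a real transformer-based LLM works; the sample corpus is invented for illustration.

```python
# Toy sketch of two concepts: (1) words become numeric vectors,
# (2) "generation" means picking the statistically most frequent
# next word -- a likelihood calculation, not a check of truth.
from collections import Counter

# Hypothetical miniature corpus for illustration only.
corpus = "the patient is stable the patient is improving the plan is discharge".split()

# 1) One-hot encoding: each word maps to a vector a computer can process.
vocab = sorted(set(corpus))

def one_hot(word):
    return [1.0 if w == word else 0.0 for w in vocab]

# 2) Bigram counts: for each word, tally which word follows it.
bigrams = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word):
    candidates = {nxt: n for (cur, nxt), n in bigrams.items() if cur == word}
    return max(candidates, key=candidates.get)

print(one_hot("patient"))       # a bare vector; the meaning of "patient" is gone
print(most_likely_next("the"))  # chosen purely by frequency in the corpus
```

Even at this scale, the point holds: the model outputs whatever is statistically most associated, with no representation of whether the resulting sentence is correct.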
I will then highlight three key reasons to oppose the use of large language models in academic medicine. First, there is the issue of wrongness, sometimes called "AI hallucination." Briefly, the distillation of words into vectors, combined with the way an LLM interacts with its neural net, strips meaning and reduces language to statistical associations. Second, there is already discussion of the proliferation of low-quality research in medicine as a byproduct of the de facto publication requirements at each stage of training. By allowing, or in some cases encouraging, the use of LLMs in academic medicine, we will further drive the production of papers that all sound alike and offer no novel insight, because that is precisely what the statistical models are designed to do. Third, as a slight departure from the technical discussion above, bypassing the human writing process means a loss of human learning. I will very briefly highlight recent research on cognitive deficits in generative AI users. Academic writing, and the literature analysis needed to argue coherently in a paper, has historically ensured that authors almost by definition deeply understood their topics; offloading this work to an LLM threatens to erase that mechanism for building topical expertise.
CME
1.0
Disclosures
Access the following link to view disclosures of session presenters, presenting authors, organizers, moderators, and planners:
