AUTHOR=Chase Aaron, Most Amoreena, Xu Shaochen, Barreto Erin, Murray Brian, Henry Kelli, Smith Susan, Hedrick Tanner, Chen Xianyan, Li Sheng, Liu Tianming, Sikora Andrea
TITLE=Large language models management of complex medication regimens: a case-based evaluation
JOURNAL=Frontiers in Pharmacology
VOLUME=16
YEAR=2025
URL=https://www.frontiersin.org/journals/pharmacology/articles/10.3389/fphar.2025.1514445
DOI=10.3389/fphar.2025.1514445
ISSN=1663-9812
ABSTRACT=
Background: Large language models (LLMs) have shown the ability to diagnose complex medical cases, but only a limited number of studies have evaluated the performance of LLMs in developing evidence-based treatment plans. The purpose of this evaluation was to test four LLMs on their ability to develop safe and efficacious treatment plans for complex patients managed in the intensive care unit (ICU).
Methods: Eight high-fidelity patient cases focusing on medication management were developed by critical care clinicians, including history of present illness, laboratory values, vital signs, home medications, and current medications. Four LLMs [ChatGPT (GPT-3.5), ChatGPT (GPT-4), Claude-2, and Llama-2-70b] were prompted to develop an optimized medication regimen for each case. LLM-generated medication regimens were then reviewed by a panel of seven critical care clinicians to assess safety and efficacy, defined by the medication errors identified and the appropriateness of treatment for the clinical conditions. Appropriate treatment was measured by the average rate of clinician agreement to continue each medication in the regimen and compared across LLMs using analysis of variance (ANOVA).
Results: Clinicians identified a median of 4.1–6.9 medication errors per recommended regimen, and life-threatening medication recommendations were present in 16.3%–57.1% of the regimens, depending on the LLM. Clinicians continued LLM-recommended medications at a rate of 54.6%–67.3%, with GPT-4 having the highest rate of medication continuation among all LLMs tested (p < 0.001) and the lowest rate of life-threatening medication errors (p < 0.001).
Conclusion: Caution is warranted in using present LLMs to manage medication regimens, given the number of medication errors identified in this pilot study. However, LLMs demonstrated potential to serve as clinical decision support for the management of complex medication regimens, although domain-specific prompting and testing remain necessary.
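
Note: the Methods state that the rate of clinician agreement to continue each LLM-recommended medication was compared across models with a one-way ANOVA. As an illustration only (this is not the authors' analysis code, and the continuation rates below are hypothetical placeholders, not data from the study), such a comparison could be sketched in Python with scipy.stats.f_oneway:

    # Minimal sketch of a one-way ANOVA over per-case continuation rates
    # for the four LLMs evaluated. All numbers are invented for illustration.
    from scipy.stats import f_oneway

    # Hypothetical fraction of LLM-recommended medications that the clinician
    # panel agreed to continue, one value per patient case (8 cases per model).
    continuation_rates = {
        "GPT-3.5":     [0.55, 0.60, 0.52, 0.58, 0.50, 0.57, 0.54, 0.56],
        "GPT-4":       [0.68, 0.70, 0.65, 0.66, 0.69, 0.67, 0.64, 0.70],
        "Claude-2":    [0.58, 0.61, 0.55, 0.60, 0.57, 0.59, 0.56, 0.62],
        "Llama-2-70b": [0.53, 0.56, 0.51, 0.55, 0.54, 0.52, 0.57, 0.50],
    }

    # One-way ANOVA across the four LLM groups.
    f_stat, p_value = f_oneway(*continuation_rates.values())
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")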