AUTHOR=Palm Erin, Manikantan Astrit, Mahal Herprit, Belwadi Srikanth Subramanya, Pepin Mark E.
TITLE=Assessing the quality of AI-generated clinical notes: validated evaluation of a large language model ambient scribe
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=8
YEAR=2025
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1691499
DOI=10.3389/frai.2025.1691499
ISSN=2624-8212
ABSTRACT=
Background: Generative artificial intelligence (AI) tools are increasingly used as “ambient scribes” to draft clinical notes from patient encounters. Despite rapid adoption, few studies have systematically evaluated the quality of AI-generated documentation against physician standards using validated frameworks.
Objective: This study compared the quality of large language model (LLM)-generated clinical notes (“Ambient”) with physician-authored reference notes (“Gold”) across five clinical specialties, using the Physician Documentation Quality Instrument (PDQI-9) as a validated framework for assessing documentation quality.
Methods: We pooled 97 de-identified audio recordings of outpatient clinical encounters across general medicine, pediatrics, obstetrics/gynecology, orthopedics, and adult cardiology. For each encounter, two notes were produced: an LLM-optimized Ambient note and a blinded, physician-drafted Gold note, each based solely on the audio recording and its corresponding transcript. Two blinded specialty reviewers independently evaluated each note using the modified PDQI-9, which comprises 11 criteria rated on a Likert scale, along with binary hallucination detection. Interrater reliability was assessed using within-group interrater agreement coefficient (RWG) statistics. Paired comparisons were performed using t-tests or Mann–Whitney tests.
Results: Paired analysis of 97 clinical encounters yielded 194 notes (2 per encounter) and 388 paired reviews. Overall interrater agreement was high (RWG > 0.7), with moderate concordance in pediatrics and cardiology. Gold notes achieved higher overall quality scores (4.25/5 vs. 4.20/5, p = 0.04) and superior accuracy (p = 0.05), succinctness (p < 0.001), and internal consistency (p = 0.004) compared with Ambient notes. In contrast, Ambient notes scored higher in thoroughness (p < 0.001) and organization (p = 0.03). Hallucinations were detected in 20% of Gold notes and 31% of Ambient notes (p = 0.01). Despite these limitations, reviewers overall preferred Ambient notes (47% vs. 39% for Gold).
Conclusion: LLM-generated Ambient notes demonstrated quality comparable to physician-authored notes across multiple specialties. While Ambient notes were more thorough and better organized, they were also less succinct and more prone to hallucination. The PDQI-9 provides a validated, practical framework for evaluating AI-generated clinical documentation, and this assessment methodology can inform iterative quality optimization and support the standardization of ambient AI scribes in clinical practice.
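
NOTE: The sketch below is not the authors' code; it is a minimal Python illustration of the two statistics named in the Methods (within-group interrater agreement, RWG, for Likert-scale items, and a paired comparison of note-quality scores). All rating values and variable names are hypothetical placeholders, not study data.

# Within-group interrater agreement (RWG) and paired comparisons,
# illustrative only; assumes a 5-point Likert scale as in the PDQI-9.
import numpy as np
from scipy import stats

def rwg(ratings, n_options=5):
    # RWG = 1 - (observed variance / expected variance under a uniform
    # null distribution); the null variance is (A^2 - 1) / 12 for A options.
    observed_var = np.var(ratings, ddof=1)
    null_var = (n_options ** 2 - 1) / 12.0
    return 1.0 - observed_var / null_var

# Hypothetical scores for one PDQI-9 item from several reviewers.
reviewer_scores = [4, 5, 4, 4]
print(f"RWG = {rwg(reviewer_scores):.2f}")

# Hypothetical paired overall-quality scores (Gold vs. Ambient) per encounter.
gold = np.array([4.4, 4.1, 4.3, 4.2, 4.5])
ambient = np.array([4.2, 4.2, 4.1, 4.3, 4.3])
t_stat, p_t = stats.ttest_rel(gold, ambient)          # paired t-test
u_stat, p_u = stats.mannwhitneyu(gold, ambient)       # rank-based test, as in the abstract
print(f"paired t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")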