AUTHOR=Alharbi Wardah, Alfayez Asma Abdullah
TITLE=Explainable artificial intelligence in pancreatic cancer prediction: from transparency to clinical decision-making
JOURNAL=Frontiers in Oncology
VOLUME=15
YEAR=2025
URL=https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2025.1720039
DOI=10.3389/fonc.2025.1720039
ISSN=2234-943X
ABSTRACT=
Background/Objectives: Pancreatic cancer (PC) remains among the most lethal malignancies worldwide, with a persistently low 5-year survival rate despite advances in systemic therapies and surgical innovation. Machine learning (ML) has emerged as a transformative tool for early detection, prognostic modelling, and treatment planning in PC, yet widespread clinical use is constrained by the “black box” nature of many models. Explainable artificial intelligence (XAI) offers a pathway to reconcile model accuracy with clinical trust, enabling transparent, reproducible, and clinically meaningful predictions.
Methods: We reviewed literature from 2020–2025, focusing on ML-based studies in PC that incorporated or discussed XAI techniques. Methods were grouped by model architecture, data modality, and interpretability framework. We synthesized findings to evaluate the technical underpinnings, interpretability outcomes, and clinical relevance of XAI applications.
Results: Across 21 studies on ML in PC, only three explicitly integrated XAI, primarily using SHAP and SurvSHAP. These methods helped identify key biomarkers, comorbidities, and survival predictors, while enhancing clinician trust. XAI approaches were categorized by staging (ante-hoc vs. post-hoc), compatibility (model-agnostic vs. model-specific), and scope (local vs. global explanations). Barriers to adoption included methodological instability, limited external validation, weak workflow integration, and lack of standardized evaluation.
Conclusions: XAI has the potential to serve as a cornerstone for advancing transparent, trustworthy ML in PC prediction. By clarifying model reasoning, XAI enhances clinical interpretability and regulatory readiness. This review provides a technical and clinical synthesis of current XAI practices, positioning explainability as essential for translating ML innovations into actionable oncology tools.
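
To make the post-hoc, local-vs-global distinction in the Results concrete, the sketch below shows how SHAP values of the kind the abstract describes might be produced on a tabular clinical dataset. Everything in it is an assumption for illustration only: the synthetic data, the feature names (age, ca19_9, bilirubin, hba1c, bmi), and the GradientBoostingClassifier are hypothetical stand-ins, not the pipeline of the reviewed studies.

# Minimal sketch of a post-hoc SHAP workflow on tabular clinical data.
# Dataset, feature names, and model choice are hypothetical placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-in for a clinical/biomarker table (illustrative features only).
X = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "ca19_9": rng.lognormal(3.0, 1.0, n),
    "bilirubin": rng.lognormal(0.5, 0.6, n),
    "hba1c": rng.normal(6.0, 1.2, n),
    "bmi": rng.normal(26, 4, n),
})
# Synthetic binary outcome loosely driven by two features, purely for demonstration.
logits = 0.02 * (X["ca19_9"] - X["ca19_9"].mean()) + 0.5 * (X["hba1c"] - 6.0)
y = (logits + rng.normal(0, 1, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any probabilistic classifier could stand in here; a gradient-boosted model keeps
# TreeExplainer's output a single (n_samples, n_features) array in log-odds units.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc, local explanations: one SHAP value per patient and feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: mean absolute SHAP value ranks candidate predictors across the cohort.
global_importance = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(X.columns, global_importance), key=lambda t: -t[1]):
    print(f"{name}: {val:.3f}")

# Optional visual summaries (require matplotlib):
# shap.summary_plot(shap_values, X_test)                                      # global beeswarm
# shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0])   # local, per patient

The per-row SHAP values correspond to the "local" explanations the abstract mentions, while the mean-absolute aggregation gives the "global" ranking of predictors; SurvSHAP-style explanations for survival models follow the same idea but over time-dependent predictions and are not sketched here.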