AUTHOR=Xiang Liangliang, Gao Zixiang, Yu Peimin, Fernandez Justin, Gu Yaodong, Wang Ruoli, Gutierrez-Farewik Elena M.
TITLE=Explainable artificial intelligence for gait analysis: advances, pitfalls, and challenges - a systematic review
JOURNAL=Frontiers in Bioengineering and Biotechnology
VOLUME=13
YEAR=2025
URL=https://www.frontiersin.org/journals/bioengineering-and-biotechnology/articles/10.3389/fbioe.2025.1671344
DOI=10.3389/fbioe.2025.1671344
ISSN=2296-4185
ABSTRACT=Machine learning (ML) has emerged as a powerful tool for analyzing gait data, yet the “black-box” nature of many ML models hinders their clinical application. Explainable artificial intelligence (XAI) promises to enhance the interpretability and transparency of ML models, making them more suitable for clinical decision-making. This systematic review, registered on PROSPERO (CRD42024622752), assessed the application of XAI in gait analysis by examining its methods, performance, and potential for clinical utility. A comprehensive search across four electronic databases yielded 3676 unique records, of which 31 studies met the inclusion criteria. These studies were categorized into model-agnostic (n = 16), model-specific (n = 12), and hybrid (n = 3) interpretability approaches. Most applied local interpretation methods such as SHAP and LIME, while others used Grad-CAM, attention mechanisms, and Layer-wise Relevance Propagation. Clinical populations studied included Parkinson’s disease, stroke, sarcopenia, cerebral palsy, and musculoskeletal disorders. Reported outcomes highlighted biomechanically relevant features such as stride length and joint angles as key discriminators of pathological gait. Overall, the findings demonstrate that XAI can bridge the gap between predictive performance and interpretability, but significant challenges remain in standardization, validation, and balancing accuracy with transparency. Future research should refine XAI frameworks and assess their real-world clinical applicability across diverse gait disorders.