AUTHOR=Makaruk Hubert, Porter Jared M., Webster E. Kipling, Makaruk Beata, Tomaszewski Paweł, Nogal Marta, Gawłowski Daniel, Sobański Łukasz, Molik Bartosz, Sadowski Jerzy
TITLE=Artificial intelligence-enhanced assessment of fundamental motor skills: validity and reliability of the FUS test for jumping rope performance
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=8
YEAR=2025
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1611534
DOI=10.3389/frai.2025.1611534
ISSN=2624-8212
ABSTRACT=Introduction: Widespread concerns about children’s low fundamental motor skill (FMS) proficiency highlight the need for accurate assessment tools to support structured instruction. This study examined the validity and reliability of an AI-enhanced methodology for assessing jumping rope performance within the Fundamental Motor Skills in Sport (FUS) test. Methods: A total of 236 participants (126 primary school students aged 7–14; 110 university sports students aged 20–21) completed jumping rope tasks recorded via the FUS mobile app integrated with an AI model evaluating five process-oriented performance criteria. Concurrent validity and inter-rater reliability were examined by comparing AI-generated assessments with scores from two expert evaluators. Intra-rater reliability was also assessed through reassessment of video trials after a 3-week interval. Results: The AI model showed excellent concurrent validity and inter-rater reliability relative to expert ratings (ICC = 0.96; weighted kappa = 0.87). Agreement on individual criteria was similarly high (Cohen’s kappa = 0.83–0.87). Expert-adjusted AI scores further improved reliability (ICC = 0.98). Intra-rater reliability was also excellent, with perfect agreement for AI-generated scores (ICC = 1.00; kappa = 1.00). Conclusions: These findings demonstrate that AI-based assessment offers an objective, reliable, and scalable evaluation method, enhancing the accuracy and efficiency of FMS assessment in education and research.
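The agreement statistics reported in the abstract (Cohen’s kappa and weighted kappa between AI and expert scores) can be illustrated with a minimal self-contained sketch. This is not the authors’ analysis pipeline; the rater score lists below are invented placeholders, and the function simply implements the standard chance-corrected agreement formula for two raters.

```python
# Minimal sketch of Cohen's kappa (plain and linear-weighted) for two raters.
# The score lists are hypothetical examples, not data from the FUS study.

def cohen_kappa(r1, r2, weights=None):
    """Chance-corrected agreement between two raters over the same trials.

    weights=None   -> plain Cohen's kappa (all disagreements equal).
    weights='linear' -> linear-weighted kappa for ordinal scores.
    """
    cats = sorted(set(r1) | set(r2))
    idx = {c: i for i, c in enumerate(cats)}
    n, k = len(r1), len(cats)

    # Observed joint distribution of (rater1, rater2) scores.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n

    # Marginal score distributions of each rater.
    p1 = [sum(row) for row in obs]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Disagreement weight: 0/1 for plain kappa, scaled |i - j| for linear.
    def w(i, j):
        if weights == 'linear':
            return abs(i - j) / (k - 1) if k > 1 else 0.0
        return 0.0 if i == j else 1.0

    d_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp

# Hypothetical pass/fail scores for one criterion across ten trials.
ai_scores     = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
expert_scores = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(round(cohen_kappa(ai_scores, expert_scores), 2))  # → 0.74
```

Plain kappa fits the study’s dichotomous per-criterion scoring; a weighted variant (`weights='linear'`) is what penalizes near-misses less on ordinal total scores, which is why the abstract reports a weighted kappa for overall agreement.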