AUTHOR=Hao Xinqing, Li Ying, Wang Xiulin, Ma Changjun, Liu Ruichen, Jiao Yang, Dong Chunbo, Liu Jing TITLE=Multimodal radiomics of cerebellar subregions for machine learning-driven Alzheimer’s disease diagnosis JOURNAL=Frontiers in Aging Neuroscience VOLUME=17 YEAR=2025 URL=https://www.frontiersin.org/journals/aging-neuroscience/articles/10.3389/fnagi.2025.1679788 DOI=10.3389/fnagi.2025.1679788 ISSN=1663-4365 ABSTRACT=Objective: This study aimed to develop a machine learning model based on multimodal radiomics features from cerebellar subregions, exploiting the complementarity of cerebellar structural and metabolic imaging data for accurate diagnosis of Alzheimer’s disease (AD). Methods: A total of 164 cognitively normal (CN) subjects and 146 AD patients from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database were included. All participants had 3D T1-weighted magnetic resonance imaging (3DT1W MRI) and [18F]fluorodeoxyglucose positron emission tomography ([18F]FDG PET) data. The cerebellum was divided into 26 subregions, and radiomics features were extracted from each subregion in both imaging modalities. After feature selection, single-modality ([18F]FDG PET, 3DT1W MRI) and multimodal ([18F]FDG PET + 3DT1W MRI) random forest classification models were constructed. Model performance and clinical value were assessed using the area under the curve (AUC), calibration curves, and decision curve analysis (DCA). In addition, Shapley Additive exPlanations (SHAP) was used to quantify the contribution of each feature, thereby enhancing the interpretability of the model. Results: All three models could effectively diagnose AD, with the multimodal model showing the best performance. In the independent test set, the multimodal model achieved an AUC of 0.903, higher than the single-modality models based on [18F]FDG PET (AUC = 0.842) and 3DT1W MRI (AUC = 0.804).
The calibration curves and DCA demonstrated that all three models had good calibration and clinical applicability, especially the multimodal model. SHAP analysis of the multimodal model revealed that, among the 15 selected features, the top seven features with the highest SHAP values were derived from [18F]FDG PET images, with R_FDG_CER_III_original_firstorder_90Percentile and R_FDG_CER_VI_original_firstorder_Median being the two most important features for distinguishing AD from CN. Conclusion: The multimodal radiomics model based on cerebellar subregions, integrating [18F]FDG PET and 3DT1W MRI data, can effectively diagnose AD and may provide potential biomarkers for clinical application.
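The classification pipeline described in the abstract (feature selection followed by a random forest evaluated by AUC on an independent test set) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic data stands in for cerebellar radiomics features, the univariate selector is an assumed stand-in for whatever selection method the paper actually used, and all dimensions and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the cohort: 310 subjects (164 CN + 146 AD), with
# 200 hypothetical radiomics features pooled from FDG PET and T1w MRI
# cerebellar subregions.
X, y = make_classification(n_samples=310, n_features=200, n_informative=20,
                           random_state=0)

# Hold out an independent test set, stratified by diagnosis.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Reduce to 15 features, mirroring the 15 selected features the abstract
# reports (the paper's actual selection method may differ from SelectKBest).
selector = SelectKBest(f_classif, k=15).fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Random forest classifier, as in the abstract; hyperparameters are guesses.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train_sel, y_train)

# Evaluate discrimination on the held-out set by AUC.
auc = roc_auc_score(y_test, clf.predict_proba(X_test_sel)[:, 1])
print(f"Test AUC: {auc:.3f}")
```

In the same spirit, per-feature SHAP values (e.g. via the `shap` package's TreeExplainer) could then rank the 15 selected features by their contribution to the AD-vs-CN decision.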