AUTHOR=Yu Ziyang, Wei Zhijun, Wang Mini Han, Cui Jiazheng, Tan Jiaxiang, Xu Yang
TITLE=Quantitative evaluation of meibomian gland dysfunction via deep learning-based infrared image segmentation
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=8
YEAR=2025
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1642361
DOI=10.3389/frai.2025.1642361
ISSN=2624-8212
ABSTRACT=In recent years, numerous advanced image segmentation algorithms have been employed in the analysis of meibomian glands (MG). However, their clinical utility remains limited due to insufficient integration with the diagnostic and grading processes of meibomian gland dysfunction (MGD). To bridge this gap, the present study leverages three state-of-the-art deep learning models—DeepLabV3+, U-Net, and U-Net++—to segment infrared MG images and extract quantitative features for MGD diagnosis and severity assessment. A comprehensive set of morphological (e.g., gland area, width, length, and distortion) and distributional (e.g., gland density, count, inter-gland distance, disorder degree, and loss ratio) indicators was derived from the segmentation outcomes. Spearman correlation analysis revealed significant positive associations between most indicators and MGD severity (correlation coefficients ranging from 0.26 to 0.58; p < 0.001), indicating their potential diagnostic value. Furthermore, box-plot analysis highlighted clear distributional differences for the majority of indicators across all grades, with medians shifting progressively, interquartile ranges widening, and outliers becoming more frequent, reflecting morphological changes associated with disease progression. Logistic regression models trained on these quantitative features yielded area under the receiver operating characteristic curve (AUC) values of 0.89 ± 0.02, 0.76 ± 0.03, 0.85 ± 0.02, and 0.94 ± 0.01 for MGD grades 0, 1, 2, and 3, respectively.
The models demonstrated strong classification performance, with micro-average and macro-average AUCs of 0.87 ± 0.02 and 0.86 ± 0.03, respectively. Model stability and generalizability were validated through 5-fold cross-validation. Collectively, these findings underscore the clinical relevance and robustness of deep learning-assisted quantitative analysis for the objective diagnosis and grading of MGD, offering a promising framework for automated medical image interpretation in ophthalmology.
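The two statistical quantities the abstract relies on, Spearman's rank correlation (indicator vs. severity grade) and the ROC AUC (per-grade classification), can each be computed from first principles in a few lines. The sketch below uses only the Python standard library and hypothetical toy data, not the study's measurements: Spearman's rho is the Pearson correlation of rank-transformed samples (with average ranks for ties), and the AUC follows from the Mann-Whitney rank-sum identity.

```python
def _ranks(values):
    """Average 1-based ranks, assigning tied values the mean of their rank span."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the current tie group
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean 1-based rank over the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed samples."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney identity: P(score_pos > score_neg), ties count 0.5."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: a gland-loss indicator rising with MGD grade,
# and binary scores for a one-vs-rest grade classifier.
grades = [0, 0, 1, 1, 2, 2, 3, 3]
loss_ratio = [0.05, 0.08, 0.12, 0.15, 0.30, 0.28, 0.55, 0.60]
print(spearman_rho(loss_ratio, grades))          # strong positive association
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

In practice one would reach for `scipy.stats.spearmanr` and `sklearn.metrics.roc_auc_score`, which add p-values, multiclass micro/macro averaging, and the cross-validation machinery the study reports; the hand-rolled versions above only illustrate what those numbers measure.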