AUTHOR=Li Wei, Sun Yuan, Xu Haibing, Shang Wenwen, Dong Anding
TITLE=Systematic Review and Meta-Analysis of American College of Radiology TI-RADS Inter-Reader Reliability for Risk Stratification of Thyroid Nodules
JOURNAL=Frontiers in Oncology
VOLUME=12
YEAR=2022
URL=https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2022.840516
DOI=10.3389/fonc.2022.840516
ISSN=2234-943X
ABSTRACT=Purpose: To investigate the inter-reader agreement of the American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS) for risk stratification of thyroid nodules. Methods: A literature search of Web of Science, PubMed, Cochrane Library, EMBASE, and Google Scholar was performed to identify eligible articles published from inception until October 31, 2021. We included studies reporting the inter-reader agreement of different radiologists who applied ACR TI-RADS for the classification of thyroid nodules. Quality assessment of the included studies was performed with the Quality Assessment of Diagnostic Accuracy Studies-2 tool and the Guidelines for Reporting Reliability and Agreement Studies. Summary estimates of inter-reader agreement were pooled with a random-effects model, and multiple subgroup analyses and meta-regression were also performed. Results: A total of 13 studies comprising 5238 nodules were included in this systematic review and meta-analysis. The pooled inter-reader agreement for the overall ACR TI-RADS classification was moderate (κ=0.51, 95% CI 0.42-0.59). Substantial heterogeneity was present across studies, and meta-regression analyses suggested that the malignancy rate was a significant factor. Among the US features, inter-reader agreement was highest for composition (κ=0.58, 95% CI 0.53-0.63), followed by shape (κ=0.57, 95% CI 0.41-0.72), echogenicity (κ=0.50, 95% CI 0.40-0.60), echogenic foci (κ=0.44, 95% CI 0.36-0.53), and margin (κ=0.34, 95% CI 0.24-0.44). Conclusions: ACR TI-RADS demonstrated moderate inter-reader agreement among radiologists for the overall classification; however, the margin feature showed only fair inter-reader reliability.
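The abstract states that study-level κ estimates were pooled with a random-effects model. As an illustration of how such pooling is commonly done, the following is a minimal sketch assuming DerSimonian-Laird weighting of κ coefficients; the study values and standard errors below are hypothetical placeholders, not the 13 studies included in the review.

```python
import numpy as np

# Hypothetical study-level kappa estimates and their standard errors;
# illustrative values only, not data from the included studies.
kappas = np.array([0.45, 0.55, 0.60, 0.48, 0.52])
ses = np.array([0.05, 0.07, 0.06, 0.04, 0.08])

var_within = ses ** 2

# Fixed-effect (inverse-variance) estimate, needed for Cochran's Q.
w_fixed = 1.0 / var_within
k_fixed = np.sum(w_fixed * kappas) / np.sum(w_fixed)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2.
q = np.sum(w_fixed * (kappas - k_fixed) ** 2)
df = len(kappas) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each within-study variance.
w_rand = 1.0 / (var_within + tau2)
k_pooled = np.sum(w_rand * kappas) / np.sum(w_rand)
se_pooled = np.sqrt(1.0 / np.sum(w_rand))

# 95% confidence interval for the pooled kappa.
lo, hi = k_pooled - 1.96 * se_pooled, k_pooled + 1.96 * se_pooled
print(f"pooled kappa = {k_pooled:.2f}, 95% CI {lo:.2f}-{hi:.2f}, tau^2 = {tau2:.4f}")
```

Note that some meta-analyses pool a variance-stabilizing transform of κ rather than the raw coefficients; the untransformed pooling above is simply the most basic variant and may differ from the authors' exact procedure.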