AUTHOR=Guo Shunchao, Wang Lihui, Chen Qijian, Wang Li, Zhang Jian, Zhu Yuemin TITLE=Multimodal MRI Image Decision Fusion-Based Network for Glioma Classification JOURNAL=Frontiers in Oncology VOLUME=Volume 12 - 2022 YEAR=2022 URL=https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2022.819673 DOI=10.3389/fonc.2022.819673 ISSN=2234-943X ABSTRACT=Purpose: Glioma is the most common primary brain tumor, with varying degrees of aggressiveness and prognosis. Accurate glioma classification is therefore very important for treatment planning and prognosis prediction. The main purpose of this study was to design a novel, effective algorithm for further improving the performance of glioma subtype classification using multimodal MRI images. Method: MRI images of four modalities (T1, T2, T1ce, and FLAIR) for 221 glioma patients were collected from the Computational Precision Medicine: Radiology-Pathology 2020 challenge to classify astrocytoma, oligodendroglioma, and glioblastoma. We propose a multimodal MRI image decision fusion-based network for improving glioma classification accuracy. First, the MRI images of each modality were input into a pre-trained tumor segmentation model to delineate the tumor lesion regions. The whole tumor regions were then centrally cropped from the original MRI images and max-min normalized. Subsequently, a deep learning network was designed based on a unified DenseNet structure, which extracts features through a series of dense blocks; two fully connected layers then map the features onto the three glioma subtypes. During the training stage, the segmented images of each modality were used to train the network to obtain its best accuracy on our testing set. During the inference stage, a linear weighted module based on a decision fusion strategy was applied to assemble the predicted probabilities of the pre-trained models obtained in the training stage.
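The linear weighted decision fusion described above can be sketched as a weighted sum of the per-modality class probabilities. A minimal sketch, assuming hypothetical probabilities and modality weights for illustration (the actual weights in the paper are tuned, not these values):

```python
import numpy as np

# Hypothetical per-modality predicted probabilities (rows: T1, T2, T1ce, FLAIR)
# over the three glioma subtypes (astrocytoma, oligodendroglioma, glioblastoma).
probs = np.array([
    [0.60, 0.25, 0.15],   # T1 model
    [0.50, 0.30, 0.20],   # T2 model
    [0.70, 0.10, 0.20],   # T1ce model
    [0.55, 0.25, 0.20],   # FLAIR model
])

# Hypothetical modality weights summing to 1; the paper tunes such weights.
weights = np.array([0.2, 0.2, 0.4, 0.2])

# Linear weighted decision fusion: a convex combination of the four
# probability vectors, still a valid probability distribution.
fused = weights @ probs
prediction = int(np.argmax(fused))   # index of the fused subtype decision
```

Because the weights are non-negative and sum to one, the fused vector remains a probability distribution, so the final decision is simply its argmax.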
Finally, the performance of our method was evaluated in terms of accuracy, AUC, sensitivity, specificity, PPV, NPV, etc. Results: The proposed method achieved an accuracy of 0.878, an AUC of 0.902, a sensitivity of 0.772, a specificity of 0.930, a PPV of 0.862, an NPV of 0.949, and a Cohen's kappa of 0.773, significantly outperforming existing state-of-the-art methods. Conclusion: Compared with current studies, this study demonstrates the effectiveness and superior overall performance of the proposed multimodal MRI image decision fusion-based network for glioma subtype classification, which has enormous potential value in clinical practice.
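The reported metrics (sensitivity, specificity, PPV, NPV, Cohen's kappa) can all be derived from a multiclass confusion matrix, treating each subtype one-vs-rest. A minimal sketch using hypothetical toy labels, not data from the study:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, k):
    """k x k confusion matrix: rows = true class, columns = predicted class."""
    cm = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def one_vs_rest_metrics(cm, c):
    """Sensitivity, specificity, PPV, NPV for class c against the rest."""
    tp = cm[c, c]
    fn = cm[c].sum() - tp
    fp = cm[:, c].sum() - tp
    tn = cm.sum() - tp - fn - fp
    return dict(sensitivity=tp / (tp + fn), specificity=tn / (tn + fp),
                ppv=tp / (tp + fp), npv=tn / (tn + fn))

def cohen_kappa(cm):
    """Cohen's kappa: agreement beyond chance, from the confusion matrix."""
    n = cm.sum()
    po = np.trace(cm) / n                         # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical labels: 0 = astrocytoma, 1 = oligodendroglioma, 2 = glioblastoma.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]
cm = confusion_matrix(y_true, y_pred, 3)
acc = np.trace(cm) / cm.sum()
kappa = cohen_kappa(cm)
m0 = one_vs_rest_metrics(cm, 0)
```

Per-class one-vs-rest metrics can then be averaged across the three subtypes to obtain summary figures comparable to those reported above.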