<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="review-article" dtd-version="1.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Neurology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurol.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">1664-2295</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fneur.2026.1759459</article-id>
<article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Mini Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Convolutional neural networks: applications, challenges and future prospects in brain tumor research</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Zhang</surname>
<given-names>Peng</given-names>
</name>
<xref ref-type="aff" rid="aff1"/>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/3302816"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &#x0026; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &#x0026; editing</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="supervision" vocab-term-identifier="https://credit.niso.org/contributor-roles/supervision/">Supervision</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Yang</surname>
<given-names>Zhen</given-names>
</name>
<xref ref-type="aff" rid="aff1"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
</contrib>
</contrib-group>
<aff id="aff1"><institution>Department of Neurosurgery, The Second People&#x2019;s Hospital of Hefei, Hefei Hospital Affiliated to Anhui Medical University</institution>, <city>Hefei</city>, <country country="cn">China</country></aff>
<author-notes>
<corresp id="c001"><label>&#x002A;</label>Correspondence: Peng Zhang, <email xlink:href="mailto:a271664901@outlook.com">a271664901@outlook.com</email></corresp>
</author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-02-23">
<day>23</day>
<month>02</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2026</year>
</pub-date>
<volume>17</volume>
<elocation-id>1759459</elocation-id>
<history>
<date date-type="received">
<day>03</day>
<month>12</month>
<year>2025</year>
</date>
<date date-type="rev-recd">
<day>01</day>
<month>02</month>
<year>2026</year>
</date>
<date date-type="accepted">
<day>10</day>
<month>02</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 Zhang and Yang.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>Zhang and Yang</copyright-holder>
<license>
<ali:license_ref start_date="2026-02-23">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Brain tumors, among the most common malignancies of the central nervous system, can cause severe neurological dysfunction and disability. Early, precise diagnosis, therapeutic efficacy evaluation, and prognosis prediction are crucial for formulating treatment plans and prolonging patient survival. In recent years, artificial intelligence (AI) has been applied across biomedicine, including the identification, diagnosis, and treatment of brain tumors. Deep learning (DL) is one such AI approach, and convolutional neural networks (CNNs) are among its most widely used methods. With their powerful capabilities for automatic feature extraction and pattern recognition in images, CNNs have demonstrated great potential in the analysis of brain tumor medical images. This paper systematically reviews the progress of CNN research in brain tumors (tumor region identification and segmentation, benign&#x2013;malignant classification, IDH mutation status prediction, and differentiation of pseudo-progression from recurrence) and analyzes current challenges and future directions, aiming to provide a cutting-edge reference for neurosurgeons and researchers.</p>
</abstract>
<kwd-group>
<kwd>brain tumor</kwd>
<kwd>convolutional neural network</kwd>
<kwd>image segmentation</kwd>
<kwd>magnetic resonance</kwd>
<kwd>prognosis prediction</kwd>
</kwd-group>
<funding-group>
<funding-statement>The author(s) declared that financial support was not received for this work and/or its publication.</funding-statement>
</funding-group>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="44"/>
<page-count count="6"/>
<word-count count="5781"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Artificial Intelligence in Neurology</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>Brain tumors are a heterogeneous group of common intracranial tumors that cause significant mortality and morbidity. Malignant brain tumors are among the most aggressive and fatal tumors across all age groups (<xref ref-type="bibr" rid="ref1">1</xref>). According to the 2021 World Health Organization (WHO) classification of central nervous system tumors, brain tumors are graded from I to IV, with increasing malignancy and worsening prognosis (<xref ref-type="bibr" rid="ref2">2</xref>); in clinical practice, tumor type and grade influence treatment options. Among WHO grade IV tumors, glioblastoma is the most aggressive primary brain tumor, with a median survival of only 12&#x2013;15&#x202F;months after diagnosis (<xref ref-type="bibr" rid="ref3">3</xref>). Preoperative diagnosis and treatment of brain tumors currently rely mainly on imaging examinations, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound, and most medical images are interpreted by radiologists. Because of the wide range of pathologies, missed diagnoses of early lesions, and inter-observer variability among experts in radiological interpretation, researchers have recognized the need for computer-assisted intervention in medical image analysis. Deep learning enhances the ability to recognize, classify, and quantify patterns in medical images and helps address this need. Empirical studies have shown that convolutional neural networks (CNNs) identify hierarchical image features, deriving higher-level features from lower-level ones (<xref ref-type="bibr" rid="ref4">4</xref>). CNNs have demonstrated effectiveness across medical image analysis, including image segmentation, registration, fusion, and annotation, computer-aided diagnosis and prognosis, lesion/marker detection, and microscopy analysis. In this review, we explore how convolutional neural networks are transforming the diagnosis and treatment of brain tumors.</p>
</sec>
<sec id="sec2">
<label>2</label>
<title>The advantages of convolutional neural networks in medical image analysis</title>
<p>Pathological assessment of tissue samples is the gold standard for tumor diagnosis and grading, but a non-invasive tool capable of accurately classifying tumor types and inferring their grades would be highly desirable. Several non-invasive imaging modalities can visualize brain tumors; MRI, for example, conveys macroscopic information about a lesion&#x2019;s location, size, extent, characteristics, and relationship to surrounding structures. Beyond structural information, MRI can also assess microstructural features such as tumor cell architecture, microvasculature, and perfusion. These modalities remain limited, however, in evaluating microscopic tumor changes. With the rapid development of artificial intelligence and machine learning, deep learning, especially the convolutional neural network, has become a core technology in medical image analysis. CNNs mimic the hierarchical structure of the visual cortex and use core modules such as convolution and pooling operations to automatically extract features from the original image, from low-level to high-level, without manually designed feature extractors, effectively overcoming traditional machine learning&#x2019;s reliance on expert feature engineering and its weak generalization (<xref ref-type="bibr" rid="ref5">5</xref>). Compared with traditional methods, CNNs offer automated feature extraction, high recognition accuracy, and broad applicability in medical image analysis. They can rapidly process high-dimensional medical image data, discover subtle features imperceptible to the human eye, and, through large-scale training and optimization, precisely localize and classify lesion areas, meeting needs across diagnosis, efficacy evaluation, and prognosis prediction. These advantages provide a new technical path for brain tumor medical image analysis.</p>
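<p>As a toy illustration of the convolution and pooling modules described above, the following pure-Python sketch applies a hand-written edge filter to a tiny synthetic image and then max-pools the response. It is illustrative only: real CNNs learn their kernels from data, and the image and kernel here are hypothetical.</p>

```python
# Toy sketch of the two core CNN modules discussed above: a 2D convolution
# (here a hand-written vertical-edge filter) followed by 2x2 max-pooling.
# Pure-Python illustration, not a clinical implementation.

def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool2d(fmap, size=2):
    """Non-overlapping max-pooling, downsampling by `size`."""
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# A 4x4 "image" with a sharp vertical intensity edge in the middle.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge_kernel = [[-1, 1]]          # responds to a left-to-right intensity jump
fmap = conv2d(img, edge_kernel)  # 4x3 feature map, strongest at the edge
pooled = max_pool2d(fmap)        # 2x1 summary keeping the strongest responses
```

<p>In a trained CNN, many such learned kernels are stacked layer after layer, which is how low-level edge responses are composed into the higher-level semantic features the surrounding text describes.</p>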
</sec>
<sec id="sec3">
<label>3</label>
<title>The advantages of convolutional neural networks in the field of brain oncology</title>
<p>Convolutional neural networks can exploit large amounts of data from radiological images to obtain quantitative features of tumor histology and biomarkers non-invasively, even attempting to predict molecular characteristics and prognosis and thereby enabling more personalized treatment (<xref ref-type="bibr" rid="ref6">6</xref>). CNNs can improve accuracy and efficiency and are useful for differential diagnosis in cases that are difficult to resolve by MRI assessment, such as distinguishing glioblastoma multiforme from primary central nervous system lymphoma. Tangsrivimol et al. (<xref ref-type="bibr" rid="ref7">7</xref>) used the EfficientNetB4 architecture within a CNN framework to analyze contrast-enhanced T1-weighted images from 320 patients with suspected glioblastoma multiforme or primary central nervous system lymphoma. In the early diagnosis of brain metastases, and in stereotactic radiosurgery, where the number and location of metastases must be known precisely, 2D and 3D texture analysis models built on post-contrast T1-weighted sequences can help determine whether a tumor is a metastasis of lung cancer, melanoma, or breast cancer, because they capture differences in the local tumor environment. CNNs are also valuable for metastatic lesions and peritumoral edema: Chou et al. (<xref ref-type="bibr" rid="ref8">8</xref>) studied deep learning neural networks for automatic identification and quantification of metastatic brain tumors and peritumoral edema, and the segmentation models achieved Dice similarity coefficients of 71.6% for metastatic tumors and 85.1% for brain edema, indicating good diagnostic efficacy. However, computer-aided diagnosis must be used in an appropriate setting: if the sensitivity threshold is set too low, overdiagnosis may occur; if it is set too high, small lesions may be missed. Traditional surgical treatment of brain tumors, including robot-guided surgery, is limited by imprecise delineation of lesion extent; CNNs can provide algorithms that reduce these limitations, precisely delineating lesion size and location and guiding the surgeon in real time.</p>
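<p>The Dice similarity coefficients quoted above (71.6% and 85.1%) measure the overlap between a predicted segmentation and the ground truth: DSC = 2|A&#x2229;B| / (|A| + |B|). A minimal sketch on hypothetical binary masks:</p>

```python
# Dice similarity coefficient (DSC) between a predicted segmentation mask
# and a ground-truth mask, the metric used to score the metastasis/edema
# models above. Masks are flat lists of 0/1 voxel labels; toy data only.

def dice_coefficient(pred, truth):
    """DSC = 2 * |pred AND truth| / (|pred| + |truth|)."""
    overlap = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * overlap / total if total else 1.0  # both empty: perfect match

truth = [0, 1, 1, 1, 0, 0]   # hypothetical ground-truth lesion voxels
pred  = [0, 1, 1, 0, 1, 0]   # hypothetical model output
score = dice_coefficient(pred, truth)  # 2*2 / (3+3) = 0.667
```

<p>A DSC of 1.0 means perfect overlap and 0.0 means none, so the 0.716 and 0.851 values above indicate substantial but imperfect agreement with expert annotations.</p>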
</sec>
<sec id="sec4">
<label>4</label>
<title>The application of convolutional neural networks in brain tumor image segmentation and classification</title>
<sec id="sec5">
<label>4.1</label>
<title>Tumor region identification and segmentation</title>
<p>The precise identification and segmentation of tumor regions is the foundation of brain tumor diagnosis and treatment, providing a crucial basis for tumor staging, treatment planning, and efficacy evaluation. With its powerful feature extraction and pixel-level classification capabilities, the CNN has become the core technology for brain tumor segmentation and is most widely used on MRI images (<xref ref-type="bibr" rid="ref9">9</xref>). MRI is the preferred imaging modality for brain tumor diagnosis, offering multi-sequence imaging (T1-weighted, contrast-enhanced T1, T2-weighted, FLAIR, etc.) in which different sequences clearly display different structural features of the tumor (tumor parenchyma, edema area, necrotic area). CNN models can integrate the complementary information of multi-sequence MRI data to improve segmentation accuracy. For example, 3D U-Net processes multi-sequence three-dimensional MRI volumes, fully exploiting spatial information and complementary features between sequences to achieve precise segmentation of the tumor core, the edema area, and the whole tumor (<xref ref-type="bibr" rid="ref10">10</xref>). In addition, multi-modal data fusion strategies (such as combining MRI with PET images) have been used to improve segmentation performance: PET images reflect the metabolic activity of the tumor and complement the anatomical information of MRI, helping to accurately identify tumor invasion areas (<xref ref-type="bibr" rid="ref11 ref12 ref13">11&#x2013;13</xref>). 
By virtue of its powerful feature extraction and spatial modeling capabilities, the convolutional neural network has become the core technical support for brain tumor image analysis, with breakthroughs in segmentation accuracy and classification efficiency. Rational use of hidden layers is critical to this performance. Shallow hidden layers capture low-level visual features such as tumor edges, texture gradients, and local pixel differences, whereas deep hidden layers aggregate and abstract high-level semantic features including tumor heterogeneity, invasion patterns, and the boundaries between tumor and normal tissue. Cross-layer integration of hidden features, covering multi-scale feature fusion, residual connections, and hidden-layer optimization with embedded attention mechanisms, enables CNNs to fully exploit multi-dimensional information from brain tumor images. This strategy effectively reduces gray-scale overlap between tumors and normal brain tissue and improves detection of small lesions and regions with blurred boundaries. Reasonable configuration of hidden-layer depth, node counts, and connection patterns balances feature expressiveness against computational efficiency, laying a structural foundation for millimeter-scale segmentation and efficient classification of brain tumors (<xref ref-type="bibr" rid="ref14">14</xref>, <xref ref-type="bibr" rid="ref15">15</xref>). Meanwhile, transfer learning has been widely explored as a key supplementary strategy for the core bottlenecks of CNN-based brain tumor segmentation. By transferring pre-trained universal visual features from large-scale natural image datasets (such as ImageNet), or from medical image datasets of other diseases, to brain tumor segmentation tasks, it effectively mitigates the scarcity of high-quality annotated data and the poor generalization of models trained on small samples. 
Transfer learning is typically implemented in two ways: first, fine-tuning pre-trained CNN backbones (such as U-Net or ResNet) to optimize tumor-specific feature decoding; second, domain-adaptive transfer learning, which reduces data distribution discrepancies across centers via adversarial training, directly addressing multi-center domain shift. Existing studies confirm that transfer learning significantly improves the Dice similarity coefficient of brain tumor segmentation, reduces missed detection of small lesions, and works synergistically with multi-modal data fusion and the inherent feature extraction advantages of CNNs (<xref ref-type="bibr" rid="ref16">16</xref>, <xref ref-type="bibr" rid="ref17">17</xref>). For multi-sequence MRI data, CNNs can automatically fuse T1, T2, and FLAIR information, resolving the gray-scale overlap between tumor and normal brain tissue and providing millimeter-level references for surgical planning (<xref ref-type="bibr" rid="ref18">18</xref>, <xref ref-type="bibr" rid="ref19">19</xref>). The technology is gradually entering clinical decision-support systems, but challenges such as multi-center domain shift and missed detection of small lesions remain (<xref ref-type="bibr" rid="ref20">20</xref>). In the future, combining attention mechanisms with federated learning, together with optimized transfer learning strategies, should further enhance the generalization of segmentation and classification, making the CNN a standardized tool for early screening and precise diagnosis and treatment of brain tumors.</p>
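<p>The fine-tuning recipe described above (keep a pre-trained backbone frozen as a feature extractor and retrain only a small task-specific head on scarce annotated data) can be caricatured in a few lines of pure Python. The &#x201C;backbone&#x201D; below is a fixed stand-in for pre-trained features and the two-class data are synthetic placeholders; this is a conceptual sketch, not a segmentation pipeline.</p>

```python
# Toy sketch of transfer learning by head fine-tuning: the "backbone" is
# frozen (never updated), and only a logistic-regression head is trained
# on a tiny synthetic task. All data and features here are hypothetical.
import math

def backbone(x):
    """Stand-in for frozen pre-trained features: fixed, never updated."""
    return [x[0] + x[1], x[0] - x[1], x[0] * x[1]]

def train_head(samples, labels, lr=0.5, epochs=200):
    """Fit a logistic head on frozen features with plain gradient descent."""
    feats = [backbone(x) for x in samples]
    w = [0.0] * len(feats[0])
    b = 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y                      # gradient of the log-loss
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    f = backbone(x)
    z = sum(wi * fi for wi, fi in zip(w, f)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Tiny synthetic two-class dataset (e.g., "normal" vs. "tumor" patches).
xs = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
ys = [0, 0, 1, 1]
w, b = train_head(xs, ys)
```

<p>The key point is that only <code>w</code> and <code>b</code> are learned from the small target dataset, which is why this strategy mitigates data scarcity; real implementations fine-tune the last layers of networks such as ResNet or U-Net in the same spirit.</p>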
</sec>
<sec id="sec6">
<label>4.2</label>
<title>Classification of benign and malignant brain tumors</title>
<p>The core of classifying benign and malignant brain tumors lies in differentiating their invasiveness and degree of malignancy. CNN models achieve automatic classification by learning the characteristic imaging differences between benign and malignant tumors (clarity of tumor boundaries, enhancement patterns, degree of edema, etc.). For example, classification systems based on models such as AlexNet and ResNet have achieved over 90% accuracy in benign&#x2013;malignant classification through feature extraction from MRI images. Moreover, CNN models combined with multimodal imaging data (such as MRI&#x202F;+&#x202F;CT) can integrate anatomical structure and density information, further improving classification performance. In clinical applications, rapid benign&#x2013;malignant classification models can provide quick diagnostic references for emergency patients and buy time for surgical treatment.</p>
<p>An enhanced-CNN brain tumor classification model (BCM-CNN) incorporating advanced three-dimensional (3D) architectures has also been proposed; experiments showed that BCM-CNN achieved the best results by optimizing the CNN&#x2019;s hyperparameters (<xref ref-type="bibr" rid="ref21">21</xref>). In terms of architecture selection and optimization, researchers have designed various efficient CNN models tailored to the characteristics of brain tumor images. Deep CNN models (such as ResNet and DenseNet) solve the vanishing-gradient problem of deep networks through residual and dense connections and can extract more abstract high-level features, making them suitable for whole-tumor feature analysis. The ResNet50 model, with its 50-layer structure and residual blocks, demonstrates strong feature representation in discriminating benign from malignant brain tumors. For example, Ali et al. (<xref ref-type="bibr" rid="ref22">22</xref>) studied CNNs for classifying gliomas, meningiomas, and pituitary tumors: models with a classic layered architecture and a ResNet50 architecture were trained and evaluated using an 80:20 training&#x2013;test split. Both architectures classified brain tumors accurately; the classic architecture reached 94.55% accuracy, while the ResNet50 architecture surpassed it at 99.88%, exceeding the 99.34% reported in earlier work. This confirms that CNNs, and the ResNet50 architecture in particular, are effective for brain tumor classification and have great potential in helping medical professionals diagnose accurately and plan treatment. Another study (<xref ref-type="bibr" rid="ref23">23</xref>) proposed a genetic algorithm (GA)&#x2013;CNN hybrid for detecting glioblastoma and other intracranial benign tumors, in which the genetic algorithm automatically selects a suitable CNN architecture; gliomas, meningiomas, and pituitary tumors were correctly identified in 90.9% and 94.2% of cases.</p>
<p>The application of 3D CNNs provides spatial features for discriminating benign from malignant tumors and is especially suited to MRI volumetric data (<xref ref-type="bibr" rid="ref24">24</xref>). Unlike 2D CNNs, which use information from single slices only, 3D CNNs extract features in three dimensions (length, width, and depth) using 3D convolution kernels, fully capturing the tumor&#x2019;s morphology, spatial distribution, and anatomical relationships with surrounding tissue in three-dimensional space and avoiding misjudgments caused by the loss of inter-slice information. Models such as 3D ResNet and 3D V-Net perform well in discriminating benign from malignant brain tumors. Some studies have used 3D U-Net to extract features from multi-sequence MRI volumes combined with an adaptive weighted fusion strategy, achieving 96.3% accuracy in discriminating glioblastoma from benign meningioma and 91.8% in discriminating low-grade glioma from benign astrocytoma, effectively resolving some borderline cases. Furthermore, Mzoughi et al. (<xref ref-type="bibr" rid="ref25">25</xref>) studied a deep multi-scale 3D CNN architecture that uses whole-volume contrast-enhanced T1 (T1-Gado) MRI sequences to classify gliomas as low-grade (LGG) or high-grade (HGG). Quantitative evaluation on the well-known BraTS 2018 benchmark showed that, compared with 2D CNN variants, the proposed architecture generated the most discriminative feature maps for distinguishing LGG from HGG, achieving 96.49% overall accuracy on the validation set and surpassing recent supervised and unsupervised state-of-the-art methods. Similarly, Yamashiro et al. (<xref ref-type="bibr" rid="ref26">26</xref>) developed a 3D CNN model using contrast-enhanced T1-weighted images, implementing an automatic glioma grading system with NVIDIA&#x2019;s pre-trained Clara segmentation model and an original 2D classification model; the 3D CNN showed good diagnostic value in glioma classification. Zhuge et al. (<xref ref-type="bibr" rid="ref27">27</xref>) likewise obtained good diagnostic value using a deep 3D CNN to grade conventional MRI images automatically. In summary, these studies indicate that CNNs have high diagnostic value in differentiating intracranial benign and malignant tumors.</p>
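<p>The 3D convolution underlying these models slides a kernel through depth as well as height and width, so inter-slice context is preserved rather than lost between 2D slices. A minimal pure-Python sketch on a hypothetical toy volume:</p>

```python
# Sketch of a valid-mode 3D convolution, the core operation of the 3D CNNs
# above. The volume and kernel are toy placeholders, not medical data.

def conv3d(vol, kernel):
    """Valid-mode 3D convolution (no padding, stride 1)."""
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    D, H, W = len(vol), len(vol[0]), len(vol[0][0])
    out = []
    for d in range(D - kd + 1):
        plane = []
        for i in range(H - kh + 1):
            row = []
            for j in range(W - kw + 1):
                row.append(sum(vol[d + a][i + b][j + c] * kernel[a][b][c]
                               for a in range(kd)
                               for b in range(kh)
                               for c in range(kw)))
            plane.append(row)
        out.append(plane)
    return out

# A 3x3x3 volume with one bright voxel at the centre (a toy "lesion").
vol = [[[0] * 3 for _ in range(3)] for _ in range(3)]
vol[1][1][1] = 1
avg_kernel = [[[0.125] * 2 for _ in range(2)] for _ in range(2)]  # 2x2x2 mean
fmap = conv3d(vol, avg_kernel)  # 2x2x2 output; every window sees the lesion
```

<p>Because every 2&#xD7;2&#xD7;2 window of this volume contains the central voxel, the bright spot contributes to all output positions, illustrating how volumetric kernels aggregate context across neighboring slices rather than treating each slice independently.</p>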
</sec>
<sec id="sec7">
<label>4.3</label>
<title>Prediction of IDH mutation status in brain tumors</title>
<p>Isocitrate dehydrogenase (IDH) mutations are common genetic mutations in brain tumors, closely related to tumor development, treatment response, and prognosis (<xref ref-type="bibr" rid="ref28">28</xref>). Patients with IDH-mutant brain tumors generally have a better prognosis than wild-type patients and are sensitive to specific targeted therapies, so precise prediction of IDH mutation status is of great significance for treatment planning (<xref ref-type="bibr" rid="ref29">29</xref>, <xref ref-type="bibr" rid="ref30">30</xref>). Traditional IDH mutation detection relies on pathological biopsy, an invasive procedure that carries the risk of sampling bias. CNN-based radiomics, in contrast, can predict IDH mutation status non-invasively from imaging data. CNN models learn the imaging feature differences between IDH-mutant and wild-type brain tumors (tumor morphology, enhancement pattern, signal intensity, etc.) to construct prediction models (<xref ref-type="bibr" rid="ref31">31</xref>). For example, a CNN model based on the MRI T2-FLAIR sequence, extracting deep features of the tumor area, predicts IDH mutation status with over 80% accuracy, and a CNN model (based on 3D U-Net) combining multi-sequence MRI data with clinical features can raise accuracy above 85%. In addition, some studies have further improved prediction specificity and sensitivity by partitioning tumors (e.g., into core and edema areas), extracting features separately, and integrating them (<xref ref-type="bibr" rid="ref32">32</xref>). 
Currently, CNN still faces some challenges in IDH mutation status prediction: first, there are differences in imaging equipment and scanning parameters among different hospitals, resulting in inconsistent data distribution and affecting the generalization ability of the model; second, the imaging features of some tumors have a weak correlation with the IDH mutation status, making it difficult to extract effective predictive features; third, there is a lack of large-scale, multi-center labeled datasets for model training.</p>
</sec>
</sec>
<sec id="sec8">
<label>5</label>
<title>Convolutional neural network in the differentiation of pseudo-progression and recurrence in brain tumor treatment</title>
<p>Differentiating pseudo-progression (PsP) from tumor recurrence (TR) after brain tumor treatment is a difficult clinical problem. PsP refers to tumor-like imaging enhancement after radiotherapy, usually occurring within 3&#x2013;6&#x202F;months of treatment, and requires no special treatment, whereas tumor recurrence requires timely adjustment of the treatment plan (<xref ref-type="bibr" rid="ref33 ref34 ref35">33&#x2013;35</xref>). PsP and TR appear similar on conventional MRI, and traditional differentiation methods (such as contrast-enhanced MRI and PET-CT) have limited accuracy, with PET-CT additionally involving radiation exposure. With its powerful feature extraction ability, the CNN provides a new solution for non-invasive, precise differentiation of PsP and TR. CNN models learn latent imaging feature differences between PsP and TR (texture of the enhancing area, signal uniformity, dynamic enhancement curves, etc.) to achieve automatic differentiation. For example, a CNN model based on dynamic contrast-enhanced MRI (DCE-MRI), extracting hemodynamic and texture features of the tumor area, can differentiate PsP from TR with over 85% accuracy, and a CNN model combining multi-sequence MRI data (T1-enhanced, FLAIR, DWI) can integrate anatomical structure, water-molecule diffusion, and inflammatory information to reach over 90%. Ying et al. (<xref ref-type="bibr" rid="ref36">36</xref>) retrospectively studied 234 patients who underwent radiotherapy after glioma resection and had suspected recurrent lesions on follow-up MRI, using different convolutional neural network (CNN) models to learn the radiological features of glioma recurrence and radiation necrosis from MRI scans. 
Among the evaluated CNN models, ResNet10 showed the highest sensitivity (0.78), specificity (0.94), and accuracy (0.91), with an AUC of 0.83. The authors concluded that the ResNet10 model performs well on conventional MRI scans and is highly applicable in clinical settings. These findings help improve the diagnostic accuracy of conventional MRI in differentiating glioma recurrence from radiation necrosis.</p>
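<p>The figures above (sensitivity 0.78, specificity 0.94, accuracy 0.91, AUC 0.83) are standard binary-classification metrics. A minimal sketch computing them from hypothetical predictions, with AUC obtained from the Mann&#x2013;Whitney rank statistic:</p>

```python
# Sensitivity, specificity, and accuracy from hard 0/1 predictions, and
# AUC from continuous scores (probability that a random positive case
# scores above a random negative one). Toy data; illustrative sketch only.

def confusion_metrics(preds, labels):
    """preds/labels are 0/1 lists; here the positive class is recurrence."""
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    return {
        "sensitivity": tp / (tp + fn),        # recall on true positives
        "specificity": tn / (tn + fp),        # recall on true negatives
        "accuracy": (tp + tn) / len(labels),
    }

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0, 0]                       # hypothetical truth
preds  = [1, 1, 0, 0, 0, 0, 1, 0]                       # thresholded output
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1, 0.6, 0.3]       # model scores
m = confusion_metrics(preds, labels)
a = auc(scores, labels)
```

<p>Note that sensitivity and specificity depend on the chosen decision threshold, which is exactly the overdiagnosis-versus-missed-lesion trade-off discussed earlier, whereas AUC summarizes ranking quality across all thresholds.</p>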
</sec>
<sec id="sec9">
<label>6</label>
<title>Convolutional neural networks: challenges and future prospects in the field of brain tumors</title>
<p>Although CNNs have made remarkable progress in brain tumor research, challenges remain. First, data on rare brain tumors are scarce, so models perform poorly on minority tumor types. Second, the heterogeneity of imaging data (differences in equipment, parameters, and scanning sequences) still limits cross-center generalization (<xref ref-type="bibr" rid="ref37">37</xref>). Finally, the &#x201C;black box&#x201D; nature of these models makes diagnostic results hard to interpret, hindering the broad trust of clinicians (<xref ref-type="bibr" rid="ref38">38</xref>). Driven by technological innovation, the CNN is nevertheless reshaping clinical practice and research paradigms in brain tumor science. Its future development centers on three directions: precision, integration, and generalization, which together promise to transform brain tumor diagnosis and treatment. At the diagnostic level, CNNs will move beyond single modalities to build a multi-dimensional &#x201C;image&#x2013;gene&#x2013;clinical&#x201D; fusion framework (<xref ref-type="bibr" rid="ref39">39</xref>). By integrating multi-sequence MRI data with genomic characteristics, non-invasive prediction of molecular markers such as IDH mutation and 1p/19q co-deletion can be achieved, with accuracy improving beyond the existing AUC of 0.92 and replacing some invasive biopsies. At the same time, explainable AI (XAI) techniques will address the &#x201C;black box&#x201D; problem: with tools such as Grad-CAM attention heat maps, models can directly highlight suspicious lesions and strengthen clinicians&#x2019; trust. In treatment planning, CNNs will advance from static evaluation to dynamic prediction. 
Combined with longitudinal image data and continuous learning algorithm, the model can capture the tumor evolution trajectory and false progress risk in advance, and provide real-time guidance for accurate delineation of radiotherapy target area and determination of surgical boundary (<xref ref-type="bibr" rid="ref40">40</xref>). The landing of augmented reality surgical navigation can project the segmentation results of CNN to the surgical field of vision in real time, which can protect the functional brain area while maximizing the tumor resection rate and reduce the risk of recurrence (<xref ref-type="bibr" rid="ref41">41</xref>). In the future, CNN will deeply integrate into the whole cycle management of brain tumors, from early screening and accurate classification to prognosis prediction and personalized treatment plan generation to form a closed loop. With the improvement of man&#x2013;machine cooperation mechanism, this technology will break through the boundary of laboratory and become a core auxiliary tool for clinicians, which will inject continuous motivation into improving the survival rate and quality of life of patients with brain tumors (<xref ref-type="bibr" rid="ref42">42</xref>).</p>
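The Grad-CAM heat maps mentioned above reduce to a simple computation once a convolutional layer's activations and the gradients of the target class score with respect to them are available: channel weights are obtained by global-average-pooling the gradients, and the heat map is the ReLU of the weighted sum of feature maps. The following is a minimal numpy-only sketch of that formula using synthetic arrays (the array shapes and the `grad_cam` function name are illustrative assumptions, not from the article or any specific framework):

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heat map for one convolutional layer.

    feature_maps: (C, H, W) activations of the chosen conv layer
    gradients:    (C, H, W) gradients of the target class score
                  with respect to those activations
    """
    # Channel weights: global average pool of the gradients
    weights = gradients.mean(axis=(1, 2))                      # shape (C,)
    # Weighted sum over channels, then ReLU to keep positive evidence
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize to [0, 1] so the map can be overlaid on an MRI slice
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Synthetic example: 4 channels of 8x8 activations with matching gradients
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heat = grad_cam(acts, grads)
print(heat.shape)  # (8, 8)
```

In practice the activations and gradients would come from a trained CNN via a deep learning framework's hooks; the resulting map is typically upsampled to the input image size before being overlaid on the scan.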
</sec>
<sec sec-type="conclusions" id="sec10">
<label>7</label>
<title>Conclusion</title>
<p>Convolutional neural networks (CNNs) have shown remarkable value in brain tumor image analysis and have become key tools for computer-aided diagnosis and research. Their core advantage is the automatic extraction of hierarchical features from multi-modal MRI and other imaging, which allows them to identify heterogeneous tumor characteristics such as irregular boundaries, necrotic regions, and enhancement patterns, and to achieve high-precision tumor segmentation, classification, and grading. However, clinical translation of the technology still faces multiple challenges. First, the models depend heavily on large-scale, high-quality labeled data, yet medical images are hard to obtain, costly to annotate, and subject to inter-expert variability, which can introduce bias. Second, the &#x201C;black box&#x201D; nature of CNNs makes their decision process difficult to explain, a major obstacle in medical settings that demand high trustworthiness. In addition, differences in equipment and scanning protocols across institutions limit model generalization, so cross-center training and standardization are needed to preserve accuracy. Future work should focus on developing lightweight models for real-time diagnosis that fit clinical workflows, and on multi-center, prospective clinical validation to ensure algorithmic safety and reliability. Moreover, integrating artificial intelligence into neurosurgery is the next step in a new era of medicine, enabling more personalized treatment and management and thus better patient outcomes (<xref ref-type="bibr" rid="ref43">43</xref>, <xref ref-type="bibr" rid="ref44">44</xref>). Medical professionals increasingly regard artificial intelligence as a clear and valuable tool for neurointervention and surgery. Overall, CNNs provide a powerful tool for the precise diagnosis and treatment of brain tumors, but their full integration into clinical practice still depends on coordinated progress in technology, data, and regulatory systems.</p>
</sec>
</body>
<back>
<sec sec-type="author-contributions" id="sec11">
<title>Author contributions</title>
<p>PZ: Writing &#x2013; review &#x0026; editing, Supervision, Writing &#x2013; original draft. ZY: Writing &#x2013; original draft.</p>
</sec>
<sec sec-type="COI-statement" id="sec12">
<title>Conflict of interest</title>
<p>The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="sec13">
<title>Generative AI statement</title>
<p>The author(s) declared that Generative AI was not used in the creation of this manuscript.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec sec-type="disclaimer" id="sec14">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><label>1.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Stember</surname><given-names>JN</given-names></name> <name><surname>Shalu</surname><given-names>H</given-names></name></person-group>. <article-title>Reinforcement learning using deep Q networks and Q learning accurately localizes brain tumors on MRI with very small training sets</article-title>. <source>BMC Med Imaging</source>. (<year>2022</year>) <volume>22</volume>:<fpage>224</fpage>. doi: <pub-id pub-id-type="doi">10.1186/s12880-022-00919-x</pub-id>, <pub-id pub-id-type="pmid">36564724</pub-id></mixed-citation></ref>
<ref id="ref2"><label>2.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Brown</surname><given-names>ED</given-names></name> <name><surname>Pelcher</surname><given-names>I</given-names></name> <name><surname>Leon</surname><given-names>S</given-names></name> <name><surname>Karkare</surname><given-names>AN</given-names></name> <name><surname>Barbero</surname><given-names>JA</given-names></name> <name><surname>Ward</surname><given-names>M</given-names></name> <etal/></person-group>. <article-title>Artificial intelligence applications in the screening and classification of glioblastoma</article-title>. <source>J Neurosurg Sci</source>. (<year>2025</year>) <volume>69</volume>:<fpage>362</fpage>&#x2013;<lpage>70</lpage>. doi: <pub-id pub-id-type="doi">10.23736/S0390-5616.25.06502-6</pub-id>, <pub-id pub-id-type="pmid">40662247</pub-id></mixed-citation></ref>
<ref id="ref3"><label>3.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Yuan</surname><given-names>M</given-names></name> <name><surname>Ding</surname><given-names>H</given-names></name> <name><surname>Guo</surname><given-names>B</given-names></name> <name><surname>Yang</surname><given-names>M</given-names></name> <name><surname>Yang</surname><given-names>Y</given-names></name> <name><surname>Xu</surname><given-names>XS</given-names></name></person-group>. <article-title>Image-based subtype classification for glioblastoma using deep learning: prognostic significance and biologic relevance</article-title>. <source>JCO Clin Cancer Inform</source>. (<year>2024</year>) <volume>8</volume>:<fpage>e2300154</fpage>. doi: <pub-id pub-id-type="doi">10.1200/CCI.23.00154</pub-id></mixed-citation></ref>
<ref id="ref4"><label>4.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Missaoui</surname><given-names>R</given-names></name> <name><surname>Hechkel</surname><given-names>W</given-names></name> <name><surname>Saadaoui</surname><given-names>W</given-names></name> <name><surname>Helali</surname><given-names>A</given-names></name> <name><surname>Leo</surname><given-names>M</given-names></name></person-group>. <article-title>Advanced deep learning and machine learning techniques for MRI brain tumor analysis: a review</article-title>. <source>Sensors</source>. (<year>2025</year>) <volume>25</volume>:<fpage>2746</fpage>. doi: <pub-id pub-id-type="doi">10.3390/s25092746</pub-id>, <pub-id pub-id-type="pmid">40363185</pub-id></mixed-citation></ref>
<ref id="ref5"><label>5.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname><given-names>Q</given-names></name> <name><surname>Tian</surname><given-names>X</given-names></name> <name><surname>Feng</surname><given-names>M</given-names></name> <name><surname>Li</surname><given-names>L</given-names></name> <name><surname>Zheng</surname><given-names>D</given-names></name> <name><surname>Li</surname><given-names>X</given-names></name></person-group>. <article-title>Transformer-based deep learning for predicting brain tumor recurrence using magnetic resonance imaging</article-title>. <source>Med Phys</source>. (<year>2025</year>) <volume>52</volume>:<fpage>e70016</fpage>. doi: <pub-id pub-id-type="doi">10.1002/mp.70016</pub-id>, <pub-id pub-id-type="pmid">40996365</pub-id></mixed-citation></ref>
<ref id="ref6"><label>6.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nalepa</surname><given-names>J</given-names></name> <name><surname>Ribalta Lorenzo</surname><given-names>P</given-names></name> <name><surname>Marcinkiewicz</surname><given-names>M</given-names></name> <name><surname>Bobek-Billewicz</surname><given-names>B</given-names></name> <name><surname>Wawrzyniak</surname><given-names>P</given-names></name> <name><surname>Walczak</surname><given-names>M</given-names></name> <etal/></person-group>. <article-title>Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors</article-title>. <source>Artif Intell Med</source>. (<year>2019</year>) <volume>102</volume>:<fpage>101769</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.artmed.2019.101769</pub-id></mixed-citation></ref>
<ref id="ref7"><label>7.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tangsrivimol</surname><given-names>JA</given-names></name> <name><surname>Schonfeld</surname><given-names>E</given-names></name> <name><surname>Zhang</surname><given-names>M</given-names></name> <name><surname>Veeravagu</surname><given-names>A</given-names></name> <name><surname>Smith</surname><given-names>TR</given-names></name> <name><surname>H&#x00E4;rtl</surname><given-names>R</given-names></name> <etal/></person-group>. <article-title>Artificial intelligence in neurosurgery: a state-of-the-art review from past to future</article-title>. <source>Diagnostics</source>. (<year>2023</year>) <volume>13</volume>:<fpage>2429</fpage>. doi: <pub-id pub-id-type="doi">10.3390/diagnostics13142429</pub-id>, <pub-id pub-id-type="pmid">37510174</pub-id></mixed-citation></ref>
<ref id="ref8"><label>8.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chou</surname><given-names>CJ</given-names></name> <name><surname>Yang</surname><given-names>HC</given-names></name> <name><surname>Chang</surname><given-names>PY</given-names></name> <name><surname>Chen</surname><given-names>C-J</given-names></name> <name><surname>Wu</surname><given-names>H-M</given-names></name> <name><surname>Lin</surname><given-names>C-F</given-names></name> <etal/></person-group>. <article-title>Automated identification and quantification of metastatic brain tumors and perilesional edema based on a deep learning neural network</article-title>. <source>J Neuro-Oncol</source>. (<year>2023</year>) <volume>166</volume>:<fpage>167</fpage>&#x2013;<lpage>74</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11060-023-04540-y</pub-id>, <pub-id pub-id-type="pmid">38133789</pub-id></mixed-citation></ref>
<ref id="ref9"><label>9.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ranjbarzadeh</surname><given-names>R</given-names></name> <name><surname>Caputo</surname><given-names>A</given-names></name> <name><surname>Tirkolaee</surname><given-names>EB</given-names></name> <name><surname>Jafarzadeh Ghoushchi</surname><given-names>A</given-names></name> <name><surname>Bendechache</surname><given-names>M</given-names></name></person-group>. <article-title>Brain tumor segmentation of MRI images: a comprehensive review on the application of artificial intelligence tools</article-title>. <source>Comput Biol Med</source>. (<year>2022</year>) <volume>152</volume>:<fpage>106405</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compbiomed.2022.106405</pub-id></mixed-citation></ref>
<ref id="ref10"><label>10.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lim</surname><given-names>CC</given-names></name> <name><surname>Ling</surname><given-names>AHW</given-names></name> <name><surname>Chong</surname><given-names>YF</given-names></name> <name><surname>Mashor</surname><given-names>MY</given-names></name> <name><surname>Alshantti</surname><given-names>K</given-names></name> <name><surname>Aziz</surname><given-names>ME</given-names></name></person-group>. <article-title>Comparative analysis of image processing techniques for enhanced MRI image quality: 3D reconstruction and segmentation using 3D U-net architecture</article-title>. <source>Diagnostics</source>. (<year>2023</year>) <volume>13</volume>:<fpage>2377</fpage>. doi: <pub-id pub-id-type="doi">10.3390/diagnostics13142377</pub-id>, <pub-id pub-id-type="pmid">37510120</pub-id></mixed-citation></ref>
<ref id="ref11"><label>11.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wen</surname><given-names>J</given-names></name> <name><surname>Khan</surname><given-names>A</given-names></name> <name><surname>Chen</surname><given-names>A</given-names></name> <name><surname>Peng</surname><given-names>W</given-names></name> <name><surname>Fang</surname><given-names>M</given-names></name> <name><surname>Philip Chen</surname><given-names>CL</given-names></name> <etal/></person-group>. <article-title>High-quality fusion and visualization for MR-PET brain tumor images via multi-dimensional features</article-title>. <source>IEEE Trans Image Process</source>. (<year>2024</year>) <volume>33</volume>:<fpage>3550</fpage>&#x2013;<lpage>63</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TIP.2024.3404660</pub-id>, <pub-id pub-id-type="pmid">38814770</pub-id></mixed-citation></ref>
<ref id="ref12"><label>12.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname><given-names>J</given-names></name> <name><surname>Molleti</surname><given-names>P</given-names></name> <name><surname>Iv</surname><given-names>M</given-names></name> <name><surname>Lee</surname><given-names>R</given-names></name> <name><surname>Itakura</surname><given-names>H</given-names></name></person-group>. <article-title>Deep learning-based brain tumor segmentation on limited sequences of magnetic resonance imaging</article-title>. <source>J Clin Oncol</source>. (<year>2022</year>) <volume>40</volume>:<fpage>2054</fpage>&#x2013;<lpage>4</lpage>. doi: <pub-id pub-id-type="doi">10.1200/jco.2022.40.16_suppl.2054</pub-id></mixed-citation></ref>
<ref id="ref13"><label>13.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>van der Voort</surname><given-names>SR</given-names></name> <name><surname>Incekara</surname><given-names>F</given-names></name> <name><surname>Wijnenga</surname><given-names>MMJ</given-names></name> <name><surname>Kapsas</surname><given-names>G</given-names></name> <name><surname>Gahrmann</surname><given-names>R</given-names></name> <name><surname>Schouten</surname><given-names>JW</given-names></name> <etal/></person-group>. <article-title>Combined molecular subtyping, grading, and segmentation of glioma using multi-task deep learning</article-title>. <source>Neuro Oncol</source>. (<year>2023</year>) <volume>25</volume>:<fpage>279</fpage>&#x2013;<lpage>89</lpage>. doi: <pub-id pub-id-type="doi">10.1093/neuonc/noac166</pub-id>, <pub-id pub-id-type="pmid">35788352</pub-id></mixed-citation></ref>
<ref id="ref14"><label>14.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Albalawi</surname><given-names>E</given-names></name> <name><surname>Thakur</surname><given-names>A</given-names></name> <name><surname>Dorai</surname><given-names>DR</given-names></name> <name><surname>Bhatia Khan</surname><given-names>S</given-names></name> <name><surname>Mahesh</surname><given-names>TR</given-names></name> <name><surname>Almusharraf</surname><given-names>A</given-names></name> <etal/></person-group>. <article-title>Enhancing brain tumor classification in MRI scans with a multi-layer customized convolutional neural network approach</article-title>. <source>Front Comput Neurosci</source>. (<year>2024</year>) <volume>18</volume>:<fpage>1418546</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fncom.2024.1418546</pub-id>, <pub-id pub-id-type="pmid">38933391</pub-id></mixed-citation></ref>
<ref id="ref15"><label>15.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Reyes</surname><given-names>D</given-names></name> <name><surname>S&#x00E1;nchez</surname><given-names>J</given-names></name></person-group>. <article-title>Performance of convolutional neural networks for the classification of brain tumors using magnetic resonance imaging</article-title>. <source>Heliyon</source>. (<year>2024</year>) <volume>10</volume>:<fpage>e25468</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.heliyon.2024.e25468</pub-id>, <pub-id pub-id-type="pmid">38352765</pub-id></mixed-citation></ref>
<ref id="ref16"><label>16.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Md Ashafuddula</surname><given-names>NI</given-names></name> <name><surname>Islam</surname><given-names>R</given-names></name></person-group>. <article-title>ContourTL-net: contour-based transfer learning algorithm for early-stage brain tumor detection</article-title>. <source>Int J Biomed Imaging</source>. (<year>2024</year>) <volume>2024</volume>:<fpage>6347920</fpage>. doi: <pub-id pub-id-type="doi">10.1155/2024/6347920</pub-id></mixed-citation></ref>
<ref id="ref17"><label>17.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname><given-names>Z</given-names></name> <name><surname>Sun</surname><given-names>Y</given-names></name> <name><surname>Bian</surname><given-names>L</given-names></name> <name><surname>Luo</surname><given-names>C</given-names></name> <name><surname>Zhu</surname><given-names>J</given-names></name> <name><surname>Zhu</surname><given-names>J</given-names></name> <etal/></person-group>. <article-title>UDA-GS: a cross-center multimodal unsupervised domain adaptation framework for glioma segmentation</article-title>. <source>Comput Biol Med</source>. (<year>2025</year>) <volume>185</volume>:<fpage>109472</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compbiomed.2024.109472</pub-id>. <comment>Epub 2024 Dec 4</comment>, <pub-id pub-id-type="pmid">39637464</pub-id></mixed-citation></ref>
<ref id="ref18"><label>18.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cleveland</surname><given-names>M</given-names></name> <name><surname>Kim</surname><given-names>A</given-names></name> <name><surname>Patel</surname><given-names>J</given-names></name> <name><surname>McCall</surname><given-names>O</given-names></name> <name><surname>Liu</surname><given-names>W</given-names></name> <name><surname>Ahmed</surname><given-names>S</given-names></name> <etal/></person-group>. <article-title>NIMG-76. A deep learning algorithm for fully automated volumetric measurement of meningioma burden</article-title>. <source>Neuro Oncol</source>. (<year>2023</year>) <volume>25</volume>:<fpage>v203</fpage>&#x2013;<lpage>3</lpage>. doi: <pub-id pub-id-type="doi">10.1093/neuonc/noad179.0771</pub-id></mixed-citation></ref>
<ref id="ref19"><label>19.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Richter</surname><given-names>S</given-names></name> <name><surname>Dremel</surname><given-names>J</given-names></name> <name><surname>Wang</surname><given-names>T</given-names></name> <name><surname>Ey&#x00FC;poglu</surname><given-names>I</given-names></name> <name><surname>Czarske</surname><given-names>J</given-names></name> <name><surname>Kirsche</surname><given-names>K</given-names></name> <etal/></person-group>. <article-title>OS03.7.A Autofluorescence based recognition of brain tumors with a convolutional neural network</article-title>. <source>Neuro Oncol</source>. (<year>2023</year>) <volume>25</volume>:<fpage>ii15</fpage>&#x2013;<lpage>6</lpage>. doi: <pub-id pub-id-type="doi">10.1093/neuonc/noad137.044</pub-id></mixed-citation></ref>
<ref id="ref20"><label>20.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Kazerooni</surname><given-names>A</given-names></name> <name><surname>Madhogarhia</surname><given-names>R</given-names></name> <name><surname>Arif</surname><given-names>S</given-names></name> <name><surname>Ware</surname><given-names>J</given-names></name> <name><surname>Bagheri</surname><given-names>S</given-names></name> <name><surname>Haldar</surname><given-names>D</given-names></name> <etal/></person-group>. <article-title>NIMG-102. RAPNO-defined segmentation and volumetric assessment of pediatric brain tumors on multi-parametric MRI scans using deep learning; a robust tool with potential application in tumor response assessment neuro-oncology</article-title> (<year>2022</year>) <volume>24</volume>:<fpage>vii188</fpage>&#x2013;<lpage>9</lpage>. doi: <pub-id pub-id-type="doi">10.1093/neuonc/noac209.720</pub-id>,</mixed-citation></ref>
<ref id="ref21"><label>21.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mandal</surname><given-names>S</given-names></name> <name><surname>Chakraborty</surname><given-names>S</given-names></name> <name><surname>Tariq</surname><given-names>MA</given-names></name> <name><surname>Ali</surname><given-names>K</given-names></name> <name><surname>Elavia</surname><given-names>Z</given-names></name> <name><surname>Khan</surname><given-names>MK</given-names></name> <etal/></person-group>. <article-title>Artificial intelligence and deep learning in revolutionizing brain tumor diagnosis and treatment: a narrative review</article-title>. <source>Cureus</source>. (<year>2024</year>) <volume>16</volume>:<fpage>e66157</fpage>. doi: <pub-id pub-id-type="doi">10.7759/cureus.66157</pub-id>, <pub-id pub-id-type="pmid">39233936</pub-id></mixed-citation></ref>
<ref id="ref22"><label>22.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ali</surname><given-names>RR</given-names></name> <name><surname>Yaacob</surname><given-names>NM</given-names></name> <name><surname>Alqaryouti</surname><given-names>MH</given-names></name> <name><surname>Sadeq</surname><given-names>AE</given-names></name> <name><surname>Doheir</surname><given-names>M</given-names></name> <name><surname>Iqtait</surname><given-names>M</given-names></name> <etal/></person-group>. <article-title>Learning architecture for brain tumor classification based on deep convolutional neural network: classic and ResNet50</article-title>. <source>Diagnostics</source>. (<year>2025</year>) <volume>15</volume>:<fpage>624</fpage>. doi: <pub-id pub-id-type="doi">10.3390/diagnostics15050624</pub-id>, <pub-id pub-id-type="pmid">40075870</pub-id></mixed-citation></ref>
<ref id="ref23"><label>23.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kabir Anaraki</surname><given-names>A</given-names></name> <name><surname>Ayati</surname><given-names>M</given-names></name> <name><surname>Kazemi</surname><given-names>F</given-names></name></person-group>. <article-title>Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms</article-title>. <source>Biocybern Biomed Eng</source>. (<year>2019</year>) <volume>39</volume>:<fpage>63</fpage>&#x2013;<lpage>74</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.bbe.2018.10.004</pub-id></mixed-citation></ref>
<ref id="ref24"><label>24.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Al-Khuzaie</surname><given-names>M</given-names></name> <name><surname>Al-Jawher</surname><given-names>W</given-names></name></person-group>. <article-title>Enhancing brain tumor classification with a novel three-dimensional convolutional neural network (3D-CNN) fusion model</article-title>. <source>J Port Sci Res</source>. (<year>2024</year>) <volume>7</volume>:<fpage>254</fpage>&#x2013;<lpage>67</lpage>. doi: <pub-id pub-id-type="doi">10.36371/port.2024.3.5</pub-id></mixed-citation></ref>
<ref id="ref25"><label>25.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mzoughi</surname><given-names>H</given-names></name> <name><surname>Njeh</surname><given-names>I</given-names></name> <name><surname>Wali</surname><given-names>A</given-names></name> <name><surname>Slima</surname><given-names>MB</given-names></name> <name><surname>BenHamida</surname><given-names>A</given-names></name> <name><surname>Mhiri</surname><given-names>C</given-names></name> <etal/></person-group>. <article-title>Deep multi-scale 3D convolutional neural network (CNN) for MRI gliomas brain tumor classification</article-title>. <source>J Digit Imaging</source>. (<year>2020</year>) <volume>33</volume>:<fpage>903</fpage>&#x2013;<lpage>15</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10278-020-00347-9</pub-id>, <pub-id pub-id-type="pmid">32440926</pub-id></mixed-citation></ref>
<ref id="ref26"><label>26.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Yamashiro</surname><given-names>H</given-names></name> <name><surname>Teramoto</surname><given-names>A</given-names></name> <name><surname>Saito</surname><given-names>K</given-names></name> <name><surname>Fujita</surname><given-names>H</given-names></name></person-group>. <article-title>Development of a fully automated glioma-grading pipeline using post-contrast T1-weighted images combined with cloud-based 3D convolutional neural network</article-title>. <source>Appl Sci</source>. (<year>2021</year>) <volume>11</volume>:<fpage>5118</fpage>. doi: <pub-id pub-id-type="doi">10.3390/app11115118</pub-id></mixed-citation></ref>
<ref id="ref27"><label>27.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zhuge</surname><given-names>Y</given-names></name> <name><surname>Ning</surname><given-names>H</given-names></name> <name><surname>Mathen</surname><given-names>P</given-names></name> <name><surname>Cheng</surname><given-names>JY</given-names></name> <name><surname>Krauze</surname><given-names>AV</given-names></name> <name><surname>Camphausen</surname><given-names>K</given-names></name> <etal/></person-group>. <article-title>Automated glioma grading on conventional MRI images using deep convolutional neural networks</article-title>. <source>Med Phys</source>. (<year>2020</year>) <volume>47</volume>:<fpage>3044</fpage>&#x2013;<lpage>53</lpage>. doi: <pub-id pub-id-type="doi">10.1002/mp.14168</pub-id>, <pub-id pub-id-type="pmid">32277478</pub-id></mixed-citation></ref>
<ref id="ref28"><label>28.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Liang</surname><given-names>X</given-names></name> <name><surname>Tan</surname><given-names>S</given-names></name> <name><surname>Chen</surname><given-names>Y</given-names></name> <name><surname>Wei</surname><given-names>C</given-names></name> <name><surname>Qin</surname><given-names>Z</given-names></name></person-group>. <article-title>Bioinformatics exploration of SPHKAP'S role in IDH-mutant glioma involving energy metabolism, prognosis, and immune modulation</article-title>. <source>J Neuroimmunol</source>. (<year>2025</year>) <volume>402</volume>:<fpage>578570</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.jneuroim.2025.578570</pub-id>, <pub-id pub-id-type="pmid">40058165</pub-id></mixed-citation></ref>
<ref id="ref29"><label>29.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ahmad</surname><given-names>O</given-names></name> <name><surname>Ahmad</surname><given-names>T</given-names></name> <name><surname>Pfister</surname><given-names>SM</given-names></name></person-group>. <article-title>IDH mutation, glioma immunogenicity, and therapeutic challenge of primary mismatch repair deficient IDH-mutant astrocytoma PMMRDIA: a systematic review</article-title>. <source>Mol Oncol</source>. (<year>2024</year>) <volume>18</volume>:<fpage>2822</fpage>&#x2013;<lpage>41</lpage>. doi: <pub-id pub-id-type="doi">10.1002/1878-0261.13598</pub-id>, <pub-id pub-id-type="pmid">38339779</pub-id></mixed-citation></ref>
<ref id="ref30"><label>30.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Decuyper</surname><given-names>M</given-names></name> <name><surname>Bonte</surname><given-names>S</given-names></name> <name><surname>Deblaere</surname><given-names>K</given-names></name> <name><surname>Van Holen</surname><given-names>R</given-names></name></person-group>. <article-title>Automated MRI based pipeline for segmentation and prediction of grade, IDH mutation and 1p19q co-deletion in glioma</article-title>. <source>Comput Med Imaging Graph</source>. (<year>2020</year>) <volume>88</volume>:<fpage>101831</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compmedimag.2020.101831</pub-id></mixed-citation></ref>
<ref id="ref31"><label>31.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><name><surname>Moassefi</surname><given-names>M</given-names></name> <name><surname>Conte</surname><given-names>G</given-names></name> <name><surname>Decker</surname><given-names>P</given-names></name> <name><surname>Kosel</surname><given-names>M</given-names></name> <name><surname>Ruff</surname><given-names>M</given-names></name> <name><surname>Burns</surname><given-names>T</given-names></name> <etal/></person-group>. <article-title>IMG-87. Differentiation of IDH-wildtype glioblastoma and primary central nervous system lymphoma using 3D deep learning on MRI neuro-oncology</article-title> (<year>2025</year>) <volume>27</volume>:<fpage>v294</fpage>&#x2013;<lpage>4</lpage>. doi: <pub-id pub-id-type="doi">10.1093/neuonc/noaf201.1166</pub-id>,</mixed-citation></ref>
<ref id="ref32"><label>32.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Das</surname><given-names>S</given-names></name> <name><surname>Nayak</surname><given-names>GK</given-names></name> <name><surname>Saba</surname><given-names>L</given-names></name> <name><surname>Kalra</surname><given-names>M</given-names></name> <name><surname>Suri</surname><given-names>JS</given-names></name> <name><surname>Saxena</surname><given-names>S</given-names></name></person-group>. <article-title>An artificial intelligence framework and its bias for brain tumor segmentation: a narrative review</article-title>. <source>Comput Biol Med</source>. (<year>2022</year>) <volume>143</volume>:<fpage>105273</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compbiomed.2022.105273</pub-id>, <pub-id pub-id-type="pmid">35228172</pub-id></mixed-citation></ref>
<ref id="ref33"><label>33.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Khalighi</surname><given-names>S</given-names></name> <name><surname>Reddy</surname><given-names>K</given-names></name> <name><surname>Midya</surname><given-names>A</given-names></name> <name><surname>Pandav</surname><given-names>KB</given-names></name> <name><surname>Madabhushi</surname><given-names>A</given-names></name> <name><surname>Abedalthagafi</surname><given-names>M</given-names></name></person-group>. <article-title>Artificial intelligence in neuro-oncology: advances and challenges in brain tumor diagnosis, prognosis, and precision treatment</article-title>. <source>NPJ Precis Oncol</source>. (<year>2024</year>) <volume>8</volume>:<fpage>80</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41698-024-00575-0</pub-id>, <pub-id pub-id-type="pmid">38553633</pub-id></mixed-citation></ref>
<ref id="ref34"><label>34.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname><given-names>P</given-names></name> <name><surname>Chen</surname><given-names>Y</given-names></name> <name><surname>Zhao</surname><given-names>J</given-names></name> <name><surname>Zheng</surname><given-names>N</given-names></name> <name><surname>Hu</surname><given-names>Y</given-names></name> <name><surname>Chao</surname><given-names>T</given-names></name> <etal/></person-group>. <article-title>Differentiation of postoperative tumor recurrence and pseudoprogression in gliomas: a comparative study of six diffusion models</article-title>. <source>Acad Radiol</source>. (<year>2025</year>) <volume>32</volume>:<fpage>6181</fpage>&#x2013;<lpage>93</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.acra.2025.07.036</pub-id>, <pub-id pub-id-type="pmid">40774877</pub-id></mixed-citation></ref>
<ref id="ref35"><label>35.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Reddy</surname><given-names>S</given-names></name> <name><surname>Lung</surname><given-names>T</given-names></name> <name><surname>Muniyappa</surname><given-names>S</given-names></name> <name><surname>Hadley</surname><given-names>C</given-names></name> <name><surname>Templeton</surname><given-names>B</given-names></name> <name><surname>Fritz</surname><given-names>J</given-names></name> <etal/></person-group>. <article-title>Radiomics and radiogenomics in differentiating progression, pseudoprogression, and radiation necrosis in gliomas</article-title>. <source>Biomedicine</source>. (<year>2025</year>) <volume>13</volume>:<fpage>1778</fpage>. doi: <pub-id pub-id-type="doi">10.3390/biomedicines13071778</pub-id>, <pub-id pub-id-type="pmid">40722849</pub-id></mixed-citation></ref>
<ref id="ref36"><label>36.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ying</surname><given-names>YZ</given-names></name> <name><surname>Cai</surname><given-names>XH</given-names></name> <name><surname>Yang</surname><given-names>H</given-names></name> <name><surname>Huang</surname><given-names>HW</given-names></name> <name><surname>Zheng</surname><given-names>D</given-names></name> <name><surname>Li</surname><given-names>HY</given-names></name> <etal/></person-group>. <article-title>Development and validation of a deep learning algorithm for discriminating glioma recurrence from radiation necrosis on MRI</article-title>. <source>Front Oncol</source>. (<year>2025</year>) <volume>15</volume>:<fpage>1573700</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fonc.2025.1573700</pub-id>, <pub-id pub-id-type="pmid">40548110</pub-id></mixed-citation></ref>
<ref id="ref37"><label>37.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ling</surname><given-names>A</given-names></name> <name><surname>Bernstock</surname><given-names>J</given-names></name> <name><surname>Torio</surname><given-names>E</given-names></name> <name><surname>Shono</surname><given-names>N</given-names></name> <name><surname>Liu</surname><given-names>J</given-names></name> <name><surname>Landivar</surname><given-names>A</given-names></name> <etal/></person-group>. <article-title>Abstract 3683: AI assisted MRI volumetrics improve recurrent glioblastoma patient stratification following immunotherapy treatment</article-title>. <source>Cancer Res</source>. (<year>2025</year>) <volume>85</volume>:<fpage>3683</fpage>. doi: <pub-id pub-id-type="doi">10.1158/1538-7445.AM2025-3683</pub-id></mixed-citation></ref>
<ref id="ref38"><label>38.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Muhammad</surname><given-names>K</given-names></name> <name><surname>Khan</surname><given-names>S</given-names></name> <name><surname>Ser</surname><given-names>JD</given-names></name> <name><surname>Albuquerque</surname><given-names>Victor Hugo C.</given-names><prefix>de</prefix></name></person-group> <article-title>Deep learning for multigrade brain tumor classification in smart healthcare systems: a prospective survey</article-title> <source>IEEE Trans Neural Netw Learn Syst</source> <year>2021</year> <volume>32</volume>  <fpage>507</fpage>&#x2013;<lpage>522</lpage> doi: <pub-id pub-id-type="doi">10.1109/TNNLS.2020.2995800</pub-id>, <pub-id pub-id-type="pmid">32603291</pub-id></mixed-citation></ref>
<ref id="ref39"><label>39.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hosseini</surname><given-names>H</given-names></name> <name><surname>Cheng</surname><given-names>C</given-names></name> <name><surname>Glass</surname><given-names>J</given-names></name> <name><surname>Reddick</surname><given-names>G</given-names></name> <name><surname>Gajjar</surname><given-names>A</given-names></name> <name><surname>Lu</surname><given-names>Z</given-names></name></person-group>. <article-title>Abstract PO-077: image clustering of brain tumor patients using a deep neural network</article-title>. <source>Clin Cancer Res</source>. (<year>2021</year>) <volume>27</volume>:<fpage>PO-077-PO-077</fpage>. doi: <pub-id pub-id-type="doi">10.1158/1557-3265.adi21-po-077</pub-id></mixed-citation></ref>
<ref id="ref40"><label>40.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Free</surname><given-names>T</given-names></name> <name><surname>Moore</surname><given-names>J</given-names></name> <name><surname>Taras</surname><given-names>M</given-names></name> <name><surname>Ganapathy</surname><given-names>S</given-names></name> <name><surname>Sreedher</surname><given-names>G</given-names></name> <name><surname>Wright</surname><given-names>E</given-names></name></person-group>. <article-title>IMG-05. Using convolutional neural networks to identify common molecular alterations in pediatric BRAIN tumors</article-title>. <source>Neuro-Oncology</source>. (<year>2024</year>) <volume>26</volume>:<fpage>01</fpage>&#x2013;<lpage>06</lpage>. doi: <pub-id pub-id-type="doi">10.1093/neuonc/noae064.342</pub-id></mixed-citation></ref>
<ref id="ref41"><label>41.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mahmutoglu</surname><given-names>MA</given-names></name> <name><surname>Preetha</surname><given-names>CJ</given-names></name> <name><surname>Meredig</surname><given-names>H</given-names></name> <name><surname>Tonn</surname><given-names>JC</given-names></name> <name><surname>Weller</surname><given-names>M</given-names></name> <name><surname>Wick</surname><given-names>W</given-names></name> <etal/></person-group>. <article-title>Deep learning-based identification of brain MRI sequences using a model trained on large multicentric study cohorts</article-title>. <source>Radiol Artif Intell</source>. (<year>2024</year>) <volume>6</volume>:<fpage>e230095</fpage>. doi: <pub-id pub-id-type="doi">10.1148/ryai.230095</pub-id>, <pub-id pub-id-type="pmid">38166331</pub-id></mixed-citation></ref>
<ref id="ref42"><label>42.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Samani</surname><given-names>Z</given-names></name> <name><surname>Parker</surname><given-names>D</given-names></name> <name><surname>Wolf</surname><given-names>R</given-names></name> <name><surname>Brem</surname><given-names>S</given-names></name> <name><surname>Verma</surname><given-names>R</given-names></name></person-group>. <article-title>BRMP-04. AI-based biomarker of the peritumoral region using tissue microstructure</article-title>. <source>Neuro-Oncology</source>. (<year>2021</year>) <volume>23</volume>:<fpage>vi223</fpage>&#x2013;<lpage>4</lpage>. doi: <pub-id pub-id-type="doi">10.1093/neuonc/noab196.897</pub-id></mixed-citation></ref>
<ref id="ref43"><label>43.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sachdeva</surname><given-names>J</given-names></name> <name><surname>Sharma</surname><given-names>D</given-names></name> <name><surname>Ahuja</surname><given-names>C</given-names></name></person-group>. <article-title>Comparative analysis of different deep convolutional neural network architectures for classification of brain tumor on magnetic resonance images</article-title>. <source>Arch Comput Methods Eng</source>. (<year>2024</year>) <volume>31</volume>:<fpage>1959</fpage>&#x2013;<lpage>78</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11831-023-10041-y</pub-id></mixed-citation></ref>
<ref id="ref44"><label>44.</label><mixed-citation publication-type="journal"><article-title>Using AI to improve the molecular classification of brain tumors</article-title>. <source>Nat Med</source>. (<year>2023</year>) <volume>29</volume>:<fpage>793</fpage>&#x2013;<lpage>4</lpage>. doi: <pub-id pub-id-type="doi">10.1038/s41591-023-02298-4</pub-id></mixed-citation></ref>
</ref-list>
<fn-group>
<fn fn-type="custom" custom-type="edited-by" id="fn0001">
<p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3062392/overview">Dieine Estela Bernieri Schiavon</ext-link>, Federal University of Health Sciences of Porto Alegre, Brazil</p>
</fn>
<fn fn-type="custom" custom-type="reviewed-by" id="fn0002">
<p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3101998/overview">S. Rajeshkannan</ext-link>, St. Joseph&#x2019;s College of Engineering, India</p>
</fn>
</fn-group>
</back>
</article>