AUTHOR=Haq Ihtisham Ul, Iqbal Abid, Anas Muhammad, Masood Fahad, Alzahrani Ali S., Al-Naeem Mohammed TITLE=GAME-Net: an ensemble deep learning framework integrating Generative Autoencoders and attention mechanisms for automated brain tumor segmentation in MRI JOURNAL=Frontiers in Computational Neuroscience VOLUME=19 YEAR=2025 URL=https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2025.1702902 DOI=10.3389/fncom.2025.1702902 ISSN=1662-5188 ABSTRACT=
Introduction: Accurate and early identification of brain tumors is essential for improving therapeutic planning and clinical outcomes. Manual segmentation of Magnetic Resonance Imaging (MRI) remains time-consuming and subject to inter-observer variability. Computational models that combine Artificial Intelligence and biomedical imaging offer a pathway toward objective and efficient tumor delineation. The present study introduces a deep learning framework designed to enhance brain tumor segmentation performance.
Methods: A comprehensive ensemble architecture was developed by integrating Generative Autoencoders with Attention Mechanisms (GAME), Convolutional Neural Networks, and attention-augmented U-Net segmentation modules. The dataset comprised 5,880 MRI images sourced from the BraTS 2023 benchmark distribution accessed via Kaggle, partitioned into training, validation, and testing subsets. Preprocessing included intensity normalization, augmentation, and unsupervised feature extraction. Tumor segmentation employed an attention-based U-Net, while tumor classification utilized a CNN coupled with Transformer-style self-attention. The Generative Autoencoder performed unsupervised representation learning to refine feature separability and enhance robustness to MRI variability.
Results: The proposed framework achieved notable performance improvements across multiple evaluation metrics. The segmentation module produced a Dice Coefficient of 0.85 and a Jaccard Index of 0.78.
The classification component yielded an accuracy of 87.18%, a sensitivity of 88.3%, a specificity of 86.5%, and an AUC-ROC of 0.91. The combined use of generative modeling, attention mechanisms, and ensemble learning improved tumor localization, boundary delineation, and false-positive suppression compared with conventional architectures.
Discussion: The findings indicate that enriched representation learning and attention-driven feature refinement substantially improve segmentation accuracy on heterogeneous MRI data. The integration of unsupervised learning within the pipeline supported improved generalization across variable imaging conditions. The demonstrated performance suggests strong potential for clinical utility, although broader validation on external datasets is recommended to further substantiate generalizability.
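For reference, the evaluation metrics reported in the abstract (Dice Coefficient, Jaccard Index, sensitivity, specificity) have standard definitions over binary tumor masks. The sketch below is a minimal illustration of those definitions, not code from the paper; the function names and the `eps` smoothing term are my own assumptions.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    # Jaccard (IoU) = |A ∩ B| / |A ∪ B| for binary masks
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def sensitivity_specificity(pred, target):
    # Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return tp / (tp + fn), tn / (tn + fp)
```

In practice these would be computed per image (or per tumor sub-region for BraTS) and averaged over the test set; the `eps` term keeps the ratios defined when both masks are empty.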