AUTHOR=Zhang Chao, Liu Ya, Wu Xiaopei
TITLE=TFANet: a temporal fusion attention neural network for motor imagery decoding
JOURNAL=Frontiers in Neuroscience
VOLUME=19
YEAR=2025
URL=https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2025.1635588
DOI=10.3389/fnins.2025.1635588
ISSN=1662-453X
ABSTRACT=
Introduction: In the field of brain-computer interfaces (BCI), motor imagery (MI) classification is a critically important task, with the primary objective of decoding an individual's MI intentions from electroencephalogram (EEG) signals. However, MI decoding faces significant challenges, primarily due to the inherent complex temporal dependencies of EEG signals.
Methods: This paper proposes a temporal fusion attention network (TFANet), which aims to improve the decoding performance of MI tasks by accurately modeling the temporal dependencies in EEG signals. TFANet introduces a multi-scale temporal self-attention (MSTSA) mechanism that captures temporal variation in EEG signals across different time scales, enabling the model to capture both local and global features. Moreover, the model adaptively adjusts channel weights through a channel attention module, allowing it to focus on signals most relevant to motor imagery and further improving the utilization of temporal features. Finally, by integrating a temporal depthwise separable convolution fusion network (TDSCFN) module, TFANet reduces computational burden while enhancing its ability to capture temporal patterns.
Results: The proposed method achieves within-subject classification accuracies of 84.92% and 88.41% on the BCIC-IV-2a and BCIC-IV-2b datasets, respectively. Furthermore, using a transfer learning approach on the BCIC-IV-2a dataset, a cross-subject classification accuracy of 77.2% is attained.
Conclusion: These results demonstrate that TFANet is an effective approach for decoding MI tasks with complex temporal dependencies.
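The abstract names two mechanisms: multi-scale temporal self-attention (pooling the time axis at several scales before attending, so both local and global dynamics are captured) and a channel attention module that reweights EEG channels. The sketch below illustrates these two ideas in plain NumPy under stated assumptions; it is not the authors' implementation, and all function names, the scale set `(1, 2, 4)`, and the squeeze-and-sigmoid channel gate are illustrative choices, not details taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (T, d). Plain scaled dot-product self-attention over time,
    # without learned Q/K/V projections (a simplification for the sketch).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

def multiscale_temporal_attention(x, scales=(1, 2, 4)):
    # x: (T, d). For each scale s: average-pool the time axis by s,
    # attend at that coarser resolution, upsample back, then average
    # across scales -- one plausible reading of "multi-scale" MSTSA.
    T, d = x.shape
    outs = []
    for s in scales:
        Ts = T // s
        pooled = x[:Ts * s].reshape(Ts, s, d).mean(axis=1)   # (Ts, d)
        att = self_attention(pooled)                          # (Ts, d)
        up = np.repeat(att, s, axis=0)                        # (Ts*s, d)
        if up.shape[0] < T:                                   # pad the tail
            up = np.vstack([up, np.repeat(up[-1:], T - up.shape[0], axis=0)])
        outs.append(up)
    return np.mean(outs, axis=0)                              # (T, d)

def channel_attention(x):
    # x: (C, T) EEG channels x time. Squeeze (mean over time), then a
    # sigmoid gate per channel, squeeze-and-excitation style (assumed form).
    squeeze = x.mean(axis=1)
    gate = 1.0 / (1.0 + np.exp(-squeeze))
    return x * gate[:, None]
```

For example, `multiscale_temporal_attention` on an `(8, 4)` feature matrix returns an `(8, 4)` matrix blending attention outputs computed at time resolutions 8, 4, and 2; `channel_attention` preserves the `(C, T)` shape while shrinking low-activity channels toward zero.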