AUTHOR=Zhao Kai, Lu Ruitao, Wang Siyu, Yang Xiaogang, Li Qingge, Fan Jiwei
TITLE=ST-YOLOA: a Swin-transformer-based YOLO model with an attention mechanism for SAR ship detection under complex background
JOURNAL=Frontiers in Neurorobotics
VOLUME=17
YEAR=2023
URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2023.1170163
DOI=10.3389/fnbot.2023.1170163
ISSN=1662-5218
ABSTRACT=In the field of computer vision, synthetic aperture radar (SAR) imagery is crucial for ship detection. Owing to background clutter, pose variations, and scale changes, constructing a SAR ship detection model with a low false-alarm rate and high accuracy is challenging. In this paper, we therefore propose a novel SAR ship detection model called ST-YOLOA. First, to enhance feature extraction performance and capture global information, the Swin Transformer architecture and a coordinate attention (CA) module are embedded in the STCNet backbone network. Second, a PANet path aggregation network with a residual structure is used to construct the feature pyramid, increasing the global feature extraction capability. Next, to cope with local interference and the loss of semantic information, a novel up/down-sampling method is proposed. Finally, to improve convergence speed and detection accuracy, a decoupled detection head is used to predict the target position and bounding box. To demonstrate the efficiency of the proposed method, we constructed two SAR ship detection datasets: a norm test set (NTS) and a complex test set (CTS). The experimental results show that ST-YOLOA achieves accuracies of 97.37% and 75.69% on the two datasets, respectively, outperforming other state-of-the-art methods. In particular, ST-YOLOA performs favorably in complex scenarios, with an accuracy 4.83% higher than that of YOLOX on the CTS. Moreover, ST-YOLOA achieves real-time detection at 21.4 FPS.
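The abstract states that a coordinate attention (CA) module is embedded in the STCNet backbone. For readers unfamiliar with CA, the following is a minimal PyTorch sketch of the coordinate attention block as generally published (Hou et al., 2021), not the authors' exact implementation; the class name CoordinateAttention, the reduction parameter, and the activation choice are illustrative assumptions.

```python
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Sketch of coordinate attention (Hou et al., 2021), assumed to match
    the CA module referenced in the abstract. It factorizes attention into
    two 1D encodings along height and width, so channel weights retain
    positional information -- useful for localizing small ships in clutter."""

    def __init__(self, channels: int, reduction: int = 32):  # reduction is an assumption
        super().__init__()
        mid = max(8, channels // reduction)
        # Pool along one spatial axis at a time: (B, C, H, 1) and (B, C, 1, W)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                       # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)   # (B, C, W, 1)
        # Concatenate the two directional encodings and transform jointly
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)              # back to (B, mid, 1, W)
        a_h = torch.sigmoid(self.conv_h(y_h))      # height attention (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w))      # width attention  (B, C, 1, W)
        return x * a_h * a_w                       # re-weight the input feature map


if __name__ == "__main__":
    # Usage: re-weight a backbone feature map, e.g. a 256-channel SAR feature
    feat = torch.randn(2, 256, 64, 64)
    print(CoordinateAttention(256)(feat).shape)    # torch.Size([2, 256, 64, 64])
```

Compared with channel-only attention such as SE, this factorized design keeps one spatial coordinate in each attention map, which is the property the abstract appeals to for suppressing background clutter around small targets.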