AUTHOR=Roy Abhijit , Gourisaria Mahendra Kumar , Chatterjee Rajdeep , Jha Amitkumar V. , Appasani Bhargav , Bizon Nicu , Mazare Alin Gheorghita TITLE=A privacy-preserving, on-board satellite image classification technique incorporating homomorphic encryption and transfer learning JOURNAL=Frontiers in Remote Sensing VOLUME=Volume 6 - 2025 YEAR=2025 URL=https://www.frontiersin.org/journals/remote-sensing/articles/10.3389/frsen.2025.1678882 DOI=10.3389/frsen.2025.1678882 ISSN=2673-6187 ABSTRACT=Satellite image classification is an important and challenging task in the modern technological age. Satellites can capture images of danger-prone areas with very little effort. However, satellite images are large and numerous when rapidly captured from space, and they require a huge amount of memory to store. In addition, keeping satellite images private is another important task for security purposes. On-board, instant, accurate classification of a small number of satellite images is a challenging task, which is important for determining the specific condition of an area for instant monitoring. In the proposed hybrid approach, the captured images are kept secure, while the required training of the classifier is done separately. Finally, the trained module is encrypted for use by the satellite to perform the on-board classification task. Brakerski–Fan–Vercauteren (BFV)-based homomorphic encryption is applied to EuroSAT satellite images so they can be stored in cloud storage while their privacy is maintained. Then, the decrypted images are used to train five transfer learning models (YOLOv8, YOLOv12, ResNet34, ResNet101, and a vision transformer) for classification. The best-trained module is encoded and encrypted again using homomorphic encryption to limit the module to authorized devices.
The encrypted module is decrypted and decoded to recover the trained module, which is used for instant classification of test images. Finally, the performance of the transfer learning models is evaluated from the test results. The vision transformer classifier achieved the highest accuracy of 99.65%.