AUTHOR=Wu Jiasong, Qiu Xiang, Zhang Jing, Wu Fuzhi, Kong Youyong, Yang Guanyu, Senhadji Lotfi, Shu Huazhong TITLE=Fractional Wavelet-Based Generative Scattering Networks JOURNAL=Frontiers in Neurorobotics VOLUME=Volume 15 - 2021 YEAR=2021 URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2021.752752 DOI=10.3389/fnbot.2021.752752 ISSN=1662-5218 ABSTRACT=Generative adversarial networks (GANs) and variational auto-encoders (VAEs) provide impressive image generation from Gaussian white noise, but both are difficult to train because the generator (or encoder) and the discriminator (or decoder) must be trained simultaneously, which easily leads to unstable training. To solve or alleviate these synchronous training problems of GANs and VAEs, researchers recently proposed generative scattering networks (GSNs), which use wavelet scattering networks (ScatNets) as the encoder to obtain features (or ScatNet embeddings) and convolutional neural networks (CNNs) as the decoder to generate images. The advantage of GSNs is that the parameters of ScatNets do not need to be learned; the disadvantage is that the representational ability of ScatNets is slightly weaker than that of CNNs. In addition, the dimensionality reduction method of principal component analysis (PCA) can easily lead to overfitting during the training of GSNs and therefore degrade the quality of the images generated during testing. To further improve the quality of the generated images while keeping the advantages of GSNs, this paper proposes generative fractional scattering networks (GFRSNs), which use more expressive fractional wavelet scattering networks (FrScatNets) instead of ScatNets as the encoder to obtain features (or FrScatNet embeddings) and use CNN decoders similar to those of GSNs to generate images.
Additionally, this paper develops a new dimensionality reduction method named feature-map fusion (FMF) to replace PCA and better retain the information of FrScatNets, and it also discusses the effect of image fusion on the quality of image generation. Experimental results on the CIFAR-10 and CelebA datasets show that the proposed GFRSNs generate better images than the original GSNs on the testing dataset. Experimental comparisons of the proposed GFRSNs with deep convolutional GAN (DCGAN), progressive GAN (PGAN), and CycleGAN are also given.