AUTHOR=Xue Jiao, Wang Chundong
TITLE=EF-Feddr: communication-efficient federated learning with Douglas–Rachford splitting and error feedback
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=Volume 9 - 2026
YEAR=2026
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2026.1699896
DOI=10.3389/frai.2026.1699896
ISSN=2624-8212
ABSTRACT=
Introduction: Federated learning (FL) is a distributed machine learning paradigm that preserves data privacy and mitigates data silos. Nevertheless, frequent communication between clients and the server often becomes a major bottleneck, restricting training efficiency and scalability.
Methods: To address this challenge, we propose a novel communication-efficient algorithm, EF-Feddr, for federated composite optimization, where the objective function includes a potentially non-smooth regularization term and local datasets are non-IID. Our method is built upon the relaxed Douglas–Rachford splitting method and incorporates error feedback (EF), a widely adopted compression framework, to ensure convergence when biased compression (e.g., top-k sparsification) is applied.
Results: Under the partial client participation setting, our theoretical analysis demonstrates that EF-Feddr achieves a fast convergence rate of O(1/K) and a communication complexity of O(1/ε²). Comprehensive experiments conducted on the FEMNIST and Shakespeare benchmarks, as well as on controlled synthetic data, consistently validate the efficacy of EF-Feddr across diverse scenarios.
Discussion: The results confirm that the integration of error feedback with the relaxed Douglas–Rachford splitting method in EF-Feddr effectively overcomes the convergence degradation typically caused by biased compression, thereby offering a practical and efficient solution for communication-constrained federated learning.
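
The abstract names error feedback with top-k sparsification as the compression mechanism. The following is a minimal, generic sketch of that error-feedback loop only, not of the EF-Feddr algorithm or its Douglas–Rachford update; the function names (top_k, ef_compress) and the per-client error buffer are illustrative assumptions.

```python
# Illustrative sketch of error feedback (EF) with top-k sparsification,
# the biased-compression framework referenced in the abstract.
# Not the authors' EF-Feddr method; names and structure are assumptions.
import numpy as np

def top_k(v: np.ndarray, k: int) -> np.ndarray:
    """Biased compressor: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    if k <= 0:
        return out
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_compress(update: np.ndarray, error: np.ndarray, k: int):
    """One error-feedback step: compress (update + accumulated error),
    transmit the compressed message, and carry the residual forward."""
    corrected = update + error        # re-inject what was dropped in earlier rounds
    message = top_k(corrected, k)     # biased compression of the corrected update
    new_error = corrected - message   # residual kept locally for the next round
    return message, new_error

# Toy usage: one client compressing its local update across a few rounds.
rng = np.random.default_rng(0)
error = np.zeros(10)
for _ in range(3):
    local_update = rng.normal(size=10)
    msg, error = ef_compress(local_update, error, k=3)
    # `msg` is what would be sent to the server; `error` stays on the client.
```

The design point this sketch illustrates is why EF matters for biased compressors such as top-k: information dropped by the compressor is not lost but accumulated in the local error buffer and re-injected in later rounds, which is what restores convergence guarantees that plain biased compression would otherwise degrade.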