Efficient Adaptive Federated Optimization
Abstract
FedAda² and FedAda²++ are efficient adaptive optimization algorithms for federated learning that reduce communication and memory requirements while maintaining convergence rates comparable to resource-intensive methods.
Adaptive optimization is critical in federated learning, where enabling adaptivity on both the server and client sides has proven essential for achieving optimal performance. However, the scalability of such jointly adaptive systems is often hindered by resource limitations in communication and memory. In this paper, we introduce a class of efficient adaptive algorithms, named FedAda² and its enhanced version FedAda²++, designed specifically for large-scale, cross-device federated environments. FedAda² improves communication efficiency by avoiding the transfer of preconditioners between the server and clients. FedAda²++ extends this approach by additionally employing memory-efficient adaptive optimizers on the client side, further reducing on-device memory usage. Theoretically, we demonstrate that FedAda² and FedAda²++ achieve the same convergence rates for general, non-convex objectives as their more resource-intensive counterparts that directly integrate joint adaptivity. Extensive empirical evaluations on image and text datasets demonstrate both the advantages of joint adaptivity and the effectiveness of FedAda²/FedAda²++.
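The round structure the abstract describes can be sketched compactly. Below is a minimal, illustrative example assuming Adagrad-style adaptivity on both server and clients and a toy least-squares objective; the names (`client_update`, `server_round`, `lstsq_grad`) and all hyperparameters are hypothetical stand-ins, not the authors' reference implementation. The point to notice is that each client re-initializes its preconditioner from zero every round, so preconditioners are never communicated; FedAda²++ would additionally replace the full client accumulator with a memory-efficient statistic (e.g., an SM3- or Adafactor-style compression), which is not shown here.

```python
# Minimal sketch of a jointly adaptive federated round without preconditioner
# transfer, as described in the abstract. Illustrative only: Adagrad on both
# sides, toy least-squares clients.
import numpy as np

def lstsq_grad(w, X, y):
    """Toy per-client gradient: mean-squared-error loss (illustrative stand-in)."""
    return X.T @ (X @ w - y) / len(y)

def client_update(w_global, X, y, steps=10, lr=0.1, eps=1e-8):
    """Client-side adaptive SGD. The preconditioner v is re-initialized from
    zero each round, so it never needs to be downloaded from the server --
    the communication saving the abstract attributes to FedAda²."""
    w = w_global.copy()
    v = np.zeros_like(w)                  # local preconditioner, never sent
    for _ in range(steps):
        g = lstsq_grad(w, X, y)
        v += g ** 2                       # Adagrad accumulator
        w -= lr * g / (np.sqrt(v) + eps)
    return w - w_global                   # only the model delta is uploaded

def server_round(w, clients, v_server, server_lr=0.5, eps=1e-8):
    """Server-side adaptive step on the averaged client delta (pseudo-gradient);
    preconditioners stay where they were computed, on both sides."""
    deltas = [client_update(w, X, y) for X, y in clients]
    pseudo_grad = -np.mean(deltas, axis=0)
    v_server += pseudo_grad ** 2          # server Adagrad preconditioner
    return w - server_lr * pseudo_grad / (np.sqrt(v_server) + eps), v_server

# Usage on synthetic data (assumed setup, for demonstration only).
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
clients = []
for _ in range(4):
    X = rng.normal(size=(32, 5))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=32)))
w, v = np.zeros(5), np.zeros(5)
for _ in range(50):
    w, v = server_round(w, clients, v)
print("distance to w_true:", np.linalg.norm(w - w_true))
```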