arxiv:2410.18117

Efficient Adaptive Federated Optimization

Published on Oct 10, 2024

AI-generated summary

FedAda² and FedAda²++ are efficient adaptive optimization algorithms for federated learning that reduce communication and memory requirements while maintaining convergence rates comparable to those of resource-intensive methods.

Abstract

Adaptive optimization is critical in federated learning, where enabling adaptivity on both the server and client sides has proven essential for achieving optimal performance. However, the scalability of such jointly adaptive systems is often hindered by resource limitations in communication and memory. In this paper, we introduce a class of efficient adaptive algorithms, named FedAda² and its enhanced version FedAda²++, designed specifically for large-scale, cross-device federated environments. FedAda² optimizes communication efficiency by avoiding the transfer of preconditioners between the server and clients. Additionally, FedAda²++ extends this approach by incorporating memory-efficient adaptive optimizers on the client side, further reducing on-device memory usage. Theoretically, we demonstrate that FedAda² and FedAda²++ achieve the same convergence rates for general, non-convex objectives as their more resource-intensive counterparts that directly integrate joint adaptivity. Extensive empirical evaluations on image and text datasets demonstrate both the advantages of joint adaptivity and the effectiveness of FedAda²/FedAda²++.
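The abstract's central mechanism, joint adaptivity without preconditioner transfer, can be illustrated with a small sketch. The NumPy code below is one plausible reading of that idea under our own assumptions, not the paper's actual algorithm: both the server and the clients run Adagrad-style updates, and each client re-initializes its preconditioner to zero locally every round, so only model weights and model deltas ever cross the network. All names (client_update, run_fedada2_sketch, the learning rates and step counts) are illustrative.

```python
# A minimal sketch of jointly adaptive federated optimization without
# preconditioner transfer, assuming Adagrad-style updates on both sides.
# This is an illustration of the idea in the abstract, not the paper's
# exact FedAda² algorithm.
import numpy as np

def client_update(weights, grad_fn, steps=10, lr=0.01, eps=1e-8):
    """One client's local Adagrad; the preconditioner starts at zero
    each round, so nothing adaptive is received from the server."""
    w = weights.copy()
    precond = np.zeros_like(w)          # fresh local accumulator every round
    for _ in range(steps):
        g = grad_fn(w)
        precond += g ** 2               # Adagrad accumulator
        w -= lr * g / (np.sqrt(precond) + eps)
    return weights - w                  # model delta sent back to the server

def run_fedada2_sketch(init_weights, client_grad_fns, rounds=100,
                       server_lr=0.1, eps=1e-8):
    """Server loop: Adagrad-style updates on averaged client deltas
    ("pseudo-gradients"); server state is never broadcast."""
    w = init_weights.copy()
    server_precond = np.zeros_like(w)
    for _ in range(rounds):
        deltas = [client_update(w, f) for f in client_grad_fns]
        pseudo_grad = np.mean(deltas, axis=0)
        server_precond += pseudo_grad ** 2
        w -= server_lr * pseudo_grad / (np.sqrt(server_precond) + eps)
    return w

# Toy usage: two clients with slightly different quadratic objectives.
if __name__ == "__main__":
    targets = [np.array([1.0, -2.0]), np.array([1.5, -1.0])]
    grad_fns = [lambda w, t=t: w - t for t in targets]  # grad of 0.5*||w - t||^2
    w_final = run_fedada2_sketch(np.zeros(2), grad_fns)
    print(w_final)  # should approach the mean of the targets
```

In this reading, FedAda²++ would additionally replace the full client accumulator (precond, which is as large as the model) with a memory-efficient approximation, for example SM3- or Adafactor-style row/column statistics; that refinement is omitted from the sketch.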
