The Blessing of Heterogeneity in Federated Q-Learning: Linear Speedup and Beyond
Authors
Jiin Woo, Gauri Joshi, Yuejie Chi
Paper Information
Journal: Journal of Machine Learning Research
Added to Tracker: Jul 15, 2025
Abstract
In this paper, we consider federated Q-learning, which aims to learn an optimal Q-function by periodically aggregating local Q-estimates trained on local data alone. Focusing on infinite-horizon tabular Markov decision processes, we provide sample complexity guarantees for both the synchronous and asynchronous variants of federated Q-learning, which exhibit a linear speedup with respect to the number of agents and near-optimal dependencies on other salient problem parameters. In the asynchronous setting, existing analyses of federated Q-learning, which adopt equally weighted averaging of local Q-estimates, require that every agent covers the entire state-action space. In contrast, our improved sample complexity scales inversely with the minimum entry of the average stationary state-action occupancy distribution of all agents, thus only requiring the agents to collectively cover the entire state-action space, unveiling the blessing of heterogeneity. However, the sample complexity still suffers when the local trajectories are highly heterogeneous. In response, we propose a novel federated Q-learning algorithm with importance averaging, which gives larger weights to more frequently visited state-action pairs and achieves a robust linear speedup, as if all trajectories were centrally processed, regardless of the heterogeneity of local behavior policies.
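To make the aggregation rule concrete, here is a minimal illustrative sketch of importance averaging for tabular Q-estimates, assuming K agents whose local Q-tables and per-pair visit counts are stored as NumPy arrays of shape (K, S, A). The function name importance_average and the fallback to equal weights for pairs no agent visited are our own assumptions for the sketch, not the paper's code.

import numpy as np

def importance_average(q_local, visit_counts):
    # q_local: local Q-estimates, shape (K, S, A).
    # visit_counts: visits to each (s, a) by each agent since the
    # last synchronization, shape (K, S, A).
    q = np.asarray(q_local, dtype=float)
    n = np.asarray(visit_counts, dtype=float)
    totals = n.sum(axis=0)  # total visits per (s, a) across agents
    # Weight each agent's estimate of (s, a) by its share of the visits,
    # so agents that explored a pair more contribute more to its value.
    # Pairs no agent visited fall back to equal weights (an assumption here).
    weights = np.where(totals > 0, n / np.maximum(totals, 1.0), 1.0 / q.shape[0])
    return (weights * q).sum(axis=0)  # aggregated Q-table, shape (S, A)

# Example server-side aggregation step with synthetic data:
rng = np.random.default_rng(0)
K, S, A = 4, 10, 3
q_global = importance_average(rng.normal(size=(K, S, A)),
                              rng.integers(0, 5, size=(K, S, A)))

By comparison, equally weighted averaging would use a weight of 1/K everywhere, which is why it requires every agent to visit every state-action pair for its local estimates to be informative.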
Author Details
Jiin Woo
Gauri Joshi
Yuejie Chi
Citation Information
APA Format
Woo, J., Joshi, G., & Chi, Y. (2025). The Blessing of Heterogeneity in Federated Q-Learning: Linear Speedup and Beyond. Journal of Machine Learning Research, 26(26), 1–85. http://jmlr.org/papers/v26/24-0579.html
BibTeX Format
@article{JMLR:v26:24-0579,
author = {Jiin Woo and Gauri Joshi and Yuejie Chi},
title = {The Blessing of Heterogeneity in Federated Q-Learning: Linear Speedup and Beyond},
journal = {Journal of Machine Learning Research},
year = {2025},
volume = {26},
number = {26},
pages = {1--85},
url = {http://jmlr.org/papers/v26/24-0579.html}
}