Sample Complexity of the Linear Quadratic Regulator: A Reinforcement Learning Lens
Authors
Amirreza Neshaei Moghaddam
Alex Olshevsky
Bahman Gharesifard
Paper Information
Journal: Journal of Machine Learning Research
Added to Tracker: Sep 08, 2025
Abstract
We provide the first known algorithm that provably achieves $\varepsilon$-optimality within $\widetilde{O}(1/\varepsilon)$ function evaluations for the discounted discrete-time linear quadratic regulator problem with unknown parameters, without relying on two-point gradient estimates. Such estimates are known to be unrealistic in many settings, as they require evaluating two different policies from the exact same randomly drawn initialization. Our results substantially improve upon the existing literature outside the realm of two-point gradient estimates, which either yields $\widetilde{O}(1/\varepsilon^2)$ rates or relies heavily on stability assumptions.
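For context, here is a minimal sketch of the two estimator families at issue, written in the standard notation of the zeroth-order optimization literature rather than the paper's own: $C(K;\,x_0)$ denotes the LQR cost of policy gain $K$ from initialization $x_0$, $r > 0$ is a smoothing radius, $d$ is the number of policy parameters, and $U$ is drawn uniformly from the unit sphere.

$$\widehat{\nabla}_{\text{two-point}}\, C(K) = \frac{d}{2r}\,\bigl[\,C(K + rU;\, x_0) - C(K - rU;\, x_0)\,\bigr]\, U$$

$$\widehat{\nabla}_{\text{one-point}}\, C(K) = \frac{d}{r}\, C(K + rU;\, x_0)\, U$$

The two-point estimator evaluates both perturbed policies from the same random $x_0$, which is exactly the assumption the abstract calls unrealistic; the one-point estimator avoids it but has much higher variance, which is why prior analyses without two-point estimates typically obtained $\widetilde{O}(1/\varepsilon^2)$ rates.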
Citation Information
APA Format
Neshaei Moghaddam, A., Olshevsky, A., & Gharesifard, B. Sample Complexity of the Linear Quadratic Regulator: A Reinforcement Learning Lens. Journal of Machine Learning Research.
BibTeX Format
@article{paper504,
  title   = {Sample Complexity of the Linear Quadratic Regulator: A Reinforcement Learning Lens},
  author  = {Amirreza Neshaei Moghaddam and Alex Olshevsky and Bahman Gharesifard},
  journal = {Journal of Machine Learning Research},
  url     = {https://www.jmlr.org/papers/v26/24-0636.html}
}