JMLR

Data-Driven Performance Guarantees for Classical and Learned Optimizers

Authors
Rajiv Sambharya, Bartolomeo Stellato
Paper Information
  • Journal:
    Journal of Machine Learning Research
  • Added to Tracker:
    Sep 08, 2025
Abstract

We introduce a data-driven approach to analyzing the performance of continuous optimization algorithms using generalization guarantees from statistical learning theory. We study classical and learned optimizers that solve families of parametric optimization problems. We build generalization guarantees for classical optimizers using a sample convergence bound, and for learned optimizers using the Probably Approximately Correct (PAC)-Bayes framework. To train learned optimizers, we use a gradient-based algorithm to directly minimize the PAC-Bayes upper bound. Numerical experiments in signal processing, control, and meta-learning showcase the ability of our framework to provide strong generalization guarantees for both classical and learned optimizers given a fixed budget of iterations. For classical optimizers, our bounds, which hold with high probability, are much tighter than those provided by worst-case guarantees. For learned optimizers, our bounds outperform the empirical performance of their non-learned counterparts.
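The idea of training a learned optimizer by directly minimizing a PAC-Bayes upper bound can be illustrated with a minimal sketch. This is not the authors' actual method: it uses a toy family of quadratic problems, a single learnable step-size parameter with a Gaussian posterior of fixed variance, a McAllester-style bound on a loss clipped to [0, 1], and a finite-difference gradient; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parametric family: minimize 0.5 * a * x^2, with a ~ Uniform[1, 3].
n = 200                                   # number of sampled problems
a_samples = rng.uniform(1.0, 3.0, size=n)
K = 10                                    # fixed iteration budget
x0 = 1.0                                  # common starting point
sigma2 = 0.01                             # fixed posterior/prior variance (assumption)
delta = 0.05                              # failure probability of the bound

def clipped_risk(alpha):
    # Empirical risk: distance to the optimum (x* = 0) after K gradient steps
    # with step size alpha, clipped to [0, 1] so the bounded-loss assumption
    # of the McAllester bound holds.
    xK = x0 * (1.0 - alpha * a_samples) ** K
    return np.clip(np.abs(xK), 0.0, 1.0).mean()

def pac_bayes_bound(alpha, alpha_prior=0.1):
    # KL divergence between Gaussian posterior N(alpha, sigma2)
    # and prior N(alpha_prior, sigma2) with equal variances.
    kl = (alpha - alpha_prior) ** 2 / (2.0 * sigma2)
    penalty = np.sqrt((kl + np.log(2.0 * np.sqrt(n) / delta)) / (2.0 * n))
    return clipped_risk(alpha) + penalty

# Gradient-based minimization of the upper bound itself
# (finite-difference gradient for simplicity).
alpha, lr, eps = 0.1, 0.05, 1e-5
initial_bound = pac_bayes_bound(alpha)
for _ in range(200):
    g = (pac_bayes_bound(alpha + eps) - pac_bayes_bound(alpha - eps)) / (2.0 * eps)
    alpha -= lr * g
final_bound = pac_bayes_bound(alpha)
```

The trained step size trades off empirical performance on the sampled problems against the KL penalty that anchors it to the prior; the resulting `final_bound` is itself the certified high-probability guarantee, which is why it is the training objective rather than the empirical risk alone.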

Citation Information
APA Format
Sambharya, R., & Stellato, B. Data-Driven Performance Guarantees for Classical and Learned Optimizers. Journal of Machine Learning Research.
BibTeX Format
@article{paper484,
  title   = {Data-Driven Performance Guarantees for Classical and Learned Optimizers},
  author  = {Rajiv Sambharya and Bartolomeo Stellato},
  journal = {Journal of Machine Learning Research},
  url     = {https://www.jmlr.org/papers/v26/24-0755.html}
}