Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation
Paper Information
Journal: Journal of Machine Learning Research
Added to Tracker: Dec 30, 2025
Abstract
We apply PAC-Bayesian theory to the setting of learning-to-optimize. To the best of our knowledge, we present the first framework for learning optimization algorithms with provable generalization guarantees (PAC-Bayesian bounds) and an explicit trade-off between convergence guarantees and convergence speed, in contrast with the typical worst-case analysis. Our learned optimization algorithms provably outperform related algorithms derived from a worst-case analysis. The results rely on PAC-Bayesian bounds for general, possibly unbounded loss functions based on exponential families. Furthermore, we provide a concrete algorithmic realization of the framework and new methodologies for learning-to-optimize. Finally, we conduct four practically relevant experiments to support our theory. With this, we show that the proposed learning framework yields optimization algorithms that provably outperform the state-of-the-art by orders of magnitude.
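For orientation, the following is a classical PAC-Bayesian bound for losses taking values in [0,1] (McAllester/Maurer style). It is shown only as a reference point and is not the bound derived in the paper, whose results extend to possibly unbounded losses via exponential families; the symbols below (prior, posterior, sample) are the standard ones and are assumed here for illustration.

% A classical PAC-Bayesian bound (illustration only; not the paper's bound).
% With probability at least 1 - \delta over an i.i.d. sample S of size n,
% simultaneously for all posteriors \rho over hypotheses (in learning-to-optimize,
% over optimization algorithms or their hyperparameters):
\[
  \mathbb{E}_{h \sim \rho}\!\left[\mathcal{L}(h)\right]
  \;\le\;
  \mathbb{E}_{h \sim \rho}\!\left[\widehat{\mathcal{L}}_S(h)\right]
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}}
\]
% where \mathcal{L} is the expected loss in [0,1], \widehat{\mathcal{L}}_S the empirical
% loss on S, \pi a data-independent prior, and \mathrm{KL} the Kullback-Leibler divergence.

The paper's contribution can be read as obtaining guarantees of this flavor for the unbounded losses arising in learning-to-optimize, coupled with the explicit trade-off between convergence guarantees and convergence speed mentioned above.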
Author Details
Michael Sucker
Jalal Fadili
Peter Ochs
Research Topics & Keywords
Bayesian Statistics
Citation Information
APA Format
Sucker, M., Fadili, J., & Ochs, P. Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation. Journal of Machine Learning Research.
BibTeX Format
@article{paper719,
  title   = {Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation},
  author  = {Michael Sucker and Jalal Fadili and Peter Ochs},
  journal = {Journal of Machine Learning Research},
  url     = {https://www.jmlr.org/papers/v26/24-0486.html}
}