
Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation

Authors
Michael Sucker, Jalal Fadili, Peter Ochs
Research Topics
Bayesian Statistics
Paper Information
  • Journal: Journal of Machine Learning Research
  • Added to Tracker: Dec 30, 2025
Abstract

We apply PAC-Bayesian theory to the setting of learning-to-optimize. To the best of our knowledge, we present the first framework for learning optimization algorithms with provable generalization guarantees (PAC-Bayesian bounds) and an explicit trade-off between convergence guarantees and convergence speed, in contrast to the typical worst-case analysis. Our learned optimization algorithms provably outperform related algorithms derived from a worst-case analysis. The results rely on PAC-Bayesian bounds for general, possibly unbounded loss functions based on exponential families. Furthermore, we provide a concrete algorithmic realization of the framework and new methodologies for learning-to-optimize. Finally, we conduct four practically relevant experiments to support our theory, showcasing that the proposed learning framework yields optimization algorithms that provably outperform the state of the art by orders of magnitude.
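For orientation only, and not as the paper's own result, the kind of guarantee meant by a "PAC-Bayesian bound" can be illustrated by the classical McAllester-style bound, which assumes a loss bounded in [0, 1]: with prior P, posterior Q, true risk R(h), empirical risk \hat{R}_n(h) on an i.i.d. sample of size n, and confidence level 1 - \delta,

\mathbb{E}_{h \sim Q}\big[R(h)\big] \;\le\; \mathbb{E}_{h \sim Q}\big[\hat{R}_n(h)\big] + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}

holds with probability at least 1 - \delta over the draw of the sample. The bounds referred to in the abstract extend beyond this bounded-loss setting to general, possibly unbounded loss functions via exponential families.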

Citation Information
APA Format
Sucker, M., Fadili, J., & Ochs, P. Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation. Journal of Machine Learning Research.
BibTeX Format
@article{paper719,
  title   = {Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation},
  author  = {Michael Sucker and Jalal Fadili and Peter Ochs},
  journal = {Journal of Machine Learning Research},
  url     = {https://www.jmlr.org/papers/v26/24-0486.html}
}