
Nonconvex Stochastic Bregman Proximal Gradient Method with Application to Deep Learning

Authors
Jingyang Li, Kuangyu Ding, Kim-Chuan Toh
Research Topics
Machine Learning
Paper Information
  • Journal:
    Journal of Machine Learning Research
  • Added to Tracker:
    Jul 30, 2025
Abstract

Stochastic gradient methods for minimizing nonconvex composite objective functions typically rely on the Lipschitz smoothness of the differentiable part, but this assumption fails in many important problem classes, such as quadratic inverse problems and neural network training, leading to instability of the algorithms in both theory and practice. To address this, we propose a family of stochastic Bregman proximal gradient (SBPG) methods that require only smooth adaptivity. SBPG replaces the quadratic approximation in SGD with a Bregman proximity measure, offering a better approximation model that handles non-Lipschitz gradients in nonconvex objectives. We establish the convergence properties of vanilla SBPG and show that it achieves optimal sample complexity in the nonconvex setting. Experimental results on quadratic inverse problems demonstrate SBPG's robustness to the choice of stepsize and initial point. Furthermore, we introduce a momentum-based variant, MSBPG, which enhances convergence by relaxing the mini-batch size requirement while preserving the optimal oracle complexity. We apply MSBPG to the training of deep neural networks, using a polynomial kernel function to ensure smooth adaptivity of the loss function. Experimental results on benchmark datasets confirm the effectiveness and robustness of MSBPG in training neural networks. Given its negligible additional computational cost compared to SGD in large-scale optimization, MSBPG shows promise as a universal open-source optimizer for future applications.
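
To make the update concrete, the following is a minimal sketch in generic notation; the kernel $\phi$, stepsize $\alpha_k$, and stochastic gradient $\nabla f(x^k; \xi^k)$ are placeholder symbols, not necessarily those of the paper. For a composite objective $F(x) = f(x) + r(x)$, the Bregman distance generated by a differentiable convex kernel $\phi$ is

  % Bregman proximity measure induced by the kernel \phi
  D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle,

and one SBPG step linearizes $f$ at the current iterate while keeping $r$ and the proximity term exact:

  % SBPG update: D_\phi replaces SGD's quadratic term \tfrac{1}{2\alpha_k}\|x - x^k\|^2
  x^{k+1} \in \operatorname*{arg\,min}_{x} \Big\{ \langle \nabla f(x^k; \xi^k),\, x \rangle + r(x) + \tfrac{1}{\alpha_k}\, D_\phi(x, x^k) \Big\}.

With $\phi(x) = \tfrac{1}{2}\|x\|^2$ one has $D_\phi(x, y) = \tfrac{1}{2}\|x - y\|^2$ and the step reduces to proximal SGD, while a higher-order polynomial kernel (for instance $\phi(x) = \tfrac{1}{4}\|x\|^4 + \tfrac{1}{2}\|x\|^2$, a standard choice for quadratic inverse problems) preserves the descent-lemma upper bound even when $\nabla f$ is not globally Lipschitz. A momentum variant such as MSBPG would feed a moving average $v^k = (1 - \beta_k)\, v^{k-1} + \beta_k \nabla f(x^k; \xi^k)$ into the same step in place of the raw stochastic gradient; the paper's exact estimator and parameter choices may differ from this sketch.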

Citation Information
APA Format
Li, J., Ding, K., & Toh, K.-C. (2025). Nonconvex stochastic Bregman proximal gradient method with application to deep learning. Journal of Machine Learning Research, 26.
BibTeX Format
@article{paper257,
  title   = {Nonconvex Stochastic Bregman Proximal Gradient Method with Application to Deep Learning},
  author  = {Jingyang Li and Kuangyu Ding and Kim-Chuan Toh},
  journal = {Journal of Machine Learning Research},
  volume  = {26},
  year    = {2025},
  url     = {https://www.jmlr.org/papers/v26/23-0657.html}
}