
Nonconvex Stochastic Bregman Proximal Gradient Method with Application to Deep Learning

Authors
Kuangyu Ding Jingyang Li Kim-Chuan Toh
Research Topics
Machine Learning
Paper Information
  • Journal:
    Journal of Machine Learning Research
  • Year:
    2025
  • Volume (Issue):
    26 (39)
  • Pages:
    1–44
Abstract

Stochastic gradient methods for minimizing nonconvex composite objective functions typically rely on the Lipschitz smoothness of the differentiable part, but this assumption fails in many important problem classes, such as quadratic inverse problems and neural network training, leading to instability of the algorithms in both theory and practice. To address this, we propose a family of stochastic Bregman proximal gradient (SBPG) methods that require only smooth adaptivity. SBPG replaces the quadratic approximation used in stochastic gradient descent (SGD) with a Bregman proximity measure, yielding a better approximation model that handles non-Lipschitz gradients in nonconvex objectives. We establish the convergence properties of vanilla SBPG and show that it achieves the optimal sample complexity in the nonconvex setting. Experimental results on quadratic inverse problems demonstrate SBPG's robustness to the choice of stepsize and initial point. Furthermore, we introduce a momentum-based variant, MSBPG, which enhances convergence by relaxing the mini-batch size requirement while preserving the optimal oracle complexity. We apply MSBPG to the training of deep neural networks, utilizing a polynomial kernel function to ensure smooth adaptivity of the loss function. Experimental results on benchmark datasets confirm the effectiveness and robustness of MSBPG in training neural networks. Given its negligible additional computational cost relative to SGD in large-scale optimization, MSBPG shows promise as a universal open-source optimizer for future applications.
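
To make the Bregman step concrete, here is a minimal NumPy sketch of one SBPG update under the polynomial kernel h(x) = (a/4)||x||^4 + (b/2)||x||^2, a standard kernel choice for quadratic inverse problems. The function name sbpg_step, the constants a and b, and the toy phase-retrieval-style loop below are illustrative assumptions, not the paper's reference implementation.

import numpy as np

def sbpg_step(x, g, eta, a=1.0, b=1.0):
    """One SBPG update with the polynomial kernel
    h(x) = (a/4)*||x||^4 + (b/2)*||x||^2, whose gradient is
    grad_h(x) = (a*||x||^2 + b) * x. The Bregman proximal step solves
    grad_h(x_next) = grad_h(x) - eta*g. Writing p for the right-hand
    side and r = ||x_next||, this reduces to the scalar cubic
    a*r^3 + b*r = ||p||, which has a unique nonnegative root when
    a, b > 0; x_next is then p rescaled to have norm r."""
    p = (a * np.dot(x, x) + b) * x - eta * g
    norm_p = np.linalg.norm(p)
    if norm_p == 0.0:
        return np.zeros_like(x)
    roots = np.roots([a, 0.0, b, -norm_p])            # a*r^3 + b*r - ||p|| = 0
    r = roots[np.abs(roots.imag) < 1e-8].real.max()   # unique real (positive) root
    return (r / norm_p) * p

# Illustrative use on a toy quadratic inverse problem
# f(x) = (1/4m) * sum_i ((a_i^T x)^2 - b_i)^2, whose gradient is not
# globally Lipschitz (hypothetical problem sizes and stepsize).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))       # measurement vectors a_i
x_true = rng.standard_normal(10)
b_obs = (A @ x_true) ** 2                # observations b_i
x = rng.standard_normal(10)
for _ in range(2000):
    i = rng.integers(len(b_obs))         # mini-batch of size 1
    resid = (A[i] @ x) ** 2 - b_obs[i]
    g = resid * (A[i] @ x) * A[i]        # stochastic gradient of the i-th term
    x = sbpg_step(x, g, eta=0.1)

With a = 0 and b = 1 the kernel reduces to h(x) = (1/2)||x||^2, the Bregman proximity measure becomes the squared Euclidean distance, and the step recovers plain SGD, which is consistent with the abstract's point that the extra cost over SGD is negligible at scale.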

Citation Information
APA Format
Ding, K., Li, J., & Toh, K.-C. (2025). Nonconvex stochastic Bregman proximal gradient method with application to deep learning. Journal of Machine Learning Research, 26(39), 1–44. http://jmlr.org/papers/v26/23-0657.html
BibTeX Format
@article{JMLR:v26:23-0657,
  author  = {Kuangyu Ding and Jingyang Li and Kim-Chuan Toh},
  title   = {Nonconvex Stochastic Bregman Proximal Gradient Method with Application to Deep Learning},
  journal = {Journal of Machine Learning Research},
  year    = {2025},
  volume  = {26},
  number  = {39},
  pages   = {1--44},
  url     = {http://jmlr.org/papers/v26/23-0657.html}
}