Losing Momentum in Continuous-time Stochastic Optimisation
Authors
Kexin Jin, Jonas Latz, Chenguang Liu, Alessandro Scagliotti

Paper Information
Journal: Journal of Machine Learning Research
Added to Tracker: Sep 08, 2025
Abstract
The training of modern machine learning models often consists of solving high-dimensional, non-convex optimisation problems over large-scale data. In this context, momentum-based stochastic optimisation algorithms have become particularly widespread; the stochasticity arises from data subsampling, which reduces the computational cost. Both momentum and stochasticity help the algorithm to converge globally. In this work, we propose and analyse a continuous-time model for stochastic gradient descent with momentum. This model is a piecewise-deterministic Markov process that represents the optimiser by an underdamped dynamical system and the data subsampling through a stochastic switching. We investigate long-time limits, the subsampling-to-no-subsampling limit, and the momentum-to-no-momentum limit. We are particularly interested in the case of reducing the momentum over time. Under convexity assumptions, we show convergence of our dynamical system to the global minimiser when reducing the momentum over time and letting the subsampling rate go to infinity. We then propose a stable, symplectic discretisation scheme to construct an algorithm from our continuous-time dynamical system. In experiments, we study our scheme on convex and non-convex test problems. Additionally, we train a convolutional neural network on an image classification problem. Our algorithm attains competitive results compared to stochastic gradient descent with momentum.
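To make the abstract's construction concrete, below is a minimal Python sketch of the idea: an underdamped system whose friction grows over time (so that momentum is gradually lost), driven by gradients of a randomly switching subsampled loss, and discretised with a semi-implicit (symplectic) Euler step. The function name losing_momentum_sgd, the friction schedule, and the toy objective are illustrative assumptions only; the precise process and discretisation in the paper may differ.

import numpy as np

def losing_momentum_sgd(grad, x0, n_batches, eta=1e-2, n_steps=5000, seed=0):
    # Hypothetical sketch: underdamped SGD whose momentum is damped out
    # over time, discretised with a semi-implicit (symplectic) Euler step.
    # grad(x, i) returns the gradient of the i-th subsampled loss; the
    # random index i mimics the stochastic switching of the PDMP model.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)            # copy so the caller's x0 is untouched
    v = np.zeros_like(x)                     # momentum (velocity) variable
    for k in range(n_steps):
        i = rng.integers(n_batches)          # data subsampling / switching
        gamma = 1.0 + 0.01 * k               # friction grows: momentum is lost
        v -= eta * (gamma * v + grad(x, i))  # update velocity first ...
        x += eta * v                         # ... then position: symplectic pairing
    return x

# Toy convex test problem: the full loss is a mean of shifted quadratics,
# with global minimiser at the average shift (here 2/3).
shifts = np.array([-1.0, 0.0, 3.0])
grad = lambda x, i: x - shifts[i]
print(losing_momentum_sgd(grad, x0=np.array([10.0]), n_batches=3))

Updating the velocity before the position is what makes the scheme symplectic; the growing friction plays a role analogous to a decreasing step size in plain stochastic gradient descent.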
Author Details
Kexin Jin (Author)
Jonas Latz (Author)
Chenguang Liu (Author)
Alessandro Scagliotti (Author)

Citation Information
APA Format
Kexin Jin, Jonas Latz, Chenguang Liu & Alessandro Scagliotti. Losing Momentum in Continuous-time Stochastic Optimisation. Journal of Machine Learning Research.
BibTeX Format
@article{paper507,
  title   = {Losing Momentum in Continuous-time Stochastic Optimisation},
  author  = {Kexin Jin and Jonas Latz and Chenguang Liu and Alessandro Scagliotti},
  journal = {Journal of Machine Learning Research},
  url     = {https://www.jmlr.org/papers/v26/23-1396.html}
}