Actor-Critic learning for mean-field control in continuous time
Paper Information
Journal: Journal of Machine Learning Research
Added to Tracker: Sep 08, 2025
Abstract
We study policy gradient for mean-field control in continuous time in a reinforcement learning setting. By considering randomised policies with entropy regularisation, we derive a gradient expectation representation of the value function, which is amenable to actor-critic type algorithms, where the value functions and the policies are learnt alternately based on observation samples of the state and model-free estimation of the population state distribution, either by offline or online learning. In the linear-quadratic mean-field framework, we obtain an exact parametrisation of the actor and critic functions defined on the Wasserstein space. Finally, we illustrate the results of our algorithms with some numerical experiments on concrete examples.
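Illustration (not from the paper): the following minimal Python sketch shows the kind of alternating actor-critic loop described in the abstract, applied to a toy one-dimensional linear-quadratic mean-field problem with an entropy-regularised Gaussian policy and an empirical (model-free) estimate of the population mean. All coefficients, the quadratic critic features, the linear policy parametrisation, and the learning rates are hypothetical choices made for this example and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1D LQ mean-field dynamics: dX = (b*X + b_bar*E[X] + a) dt + sigma dW.
b, b_bar, sigma = -0.5, 0.3, 0.4
# Running cost q*x^2 + q_bar*(x - E[X])^2 + r*a^2, entropy weight tau.
q, q_bar, r, tau = 1.0, 0.5, 0.5, 0.1
T, n_steps, n_particles = 1.0, 20, 500
dt = T / n_steps

# Actor: Gaussian policy a ~ N(k1*x + k2*mu, std^2), linear in (x, mu),
# mimicking the exact linear parametrisation available in the LQ setting.
theta = np.array([0.0, 0.0, np.log(0.5)])   # [k1, k2, log_std]
# Critic: quadratic value surrogate V(x, mu) = w1*x^2 + w2*(x - mu)^2 + w3.
w = np.zeros(3)

def policy_sample(x, mu):
    mean, std = theta[0] * x + theta[1] * mu, np.exp(theta[2])
    return mean + std * rng.standard_normal(x.shape), mean, std

def critic(x, mu):
    return w[0] * x**2 + w[1] * (x - mu)**2 + w[2]

for it in range(200):
    x = rng.standard_normal(n_particles)    # initial population sample
    g_theta, g_w = np.zeros_like(theta), np.zeros_like(w)
    for k in range(n_steps):
        mu = x.mean()                       # empirical estimate of the population mean
        a, mean, std = policy_sample(x, mu)
        cost = q * x**2 + q_bar * (x - mu)**2 + r * a**2
        entropy = 0.5 * np.log(2 * np.pi * np.e * std**2)
        x_next = (x + (b * x + b_bar * mu + a) * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal(n_particles))
        v_next = 0.0 if k == n_steps - 1 else critic(x_next, x_next.mean())
        # Temporal-difference error of the entropy-regularised cost.
        td = (cost - tau * entropy) * dt + v_next - critic(x, mu)
        # Critic: semi-gradient TD(0) step on the quadratic features.
        feats = np.stack([x**2, (x - mu)**2, np.ones_like(x)], axis=1)
        g_w += (td[:, None] * feats).mean(axis=0)
        # Actor: score-function (policy-gradient) estimate weighted by the TD error.
        score_mean = (a - mean) / std**2
        g_theta[0] += (td * score_mean * x).mean()
        g_theta[1] += (td * score_mean * mu).mean()
        g_theta[2] += (td * ((a - mean)**2 / std**2 - 1.0)).mean()
        x = x_next
    w += 0.05 * g_w                         # critic step along the TD direction
    theta -= 0.01 * g_theta                 # actor descent to reduce expected cost

print("learned feedback gains (k1, k2):", theta[0], theta[1])
print("learned policy std:", np.exp(theta[2]))

The critic and actor parameters are updated alternately from the same simulated population, which mirrors the alternating learning of value functions and policies from state samples mentioned in the abstract; the population distribution enters the sketch only through its empirical mean, as is sufficient in the linear-quadratic case.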
Author Details
Noufel FRIKHA
Maximilien GERMAIN
Mathieu LAURIERE
Huyen PHAM
Xuanye SONG
Citation Information
APA Format
Noufel FRIKHA, Maximilien GERMAIN, Mathieu LAURIERE, Huyen PHAM & Xuanye SONG. Actor-Critic learning for mean-field control in continuous time. Journal of Machine Learning Research.
BibTeX Format
@article{paper528,
  title   = {Actor-Critic learning for mean-field control in continuous time},
  author  = {Noufel FRIKHA and Maximilien GERMAIN and Mathieu LAURIERE and Huyen PHAM and Xuanye SONG},
  journal = {Journal of Machine Learning Research},
  url     = {https://www.jmlr.org/papers/v26/23-0345.html}
}