Maximum Causal Entropy IRL in Mean-Field Games and GNEP Framework for Forward RL
Paper Information
Journal: Journal of Machine Learning Research
Added to Tracker: Sep 08, 2025
Abstract
This paper explores the use of Maximum Causal Entropy Inverse Reinforcement Learning (IRL) within the context of discrete-time stationary Mean-Field Games (MFGs) characterized by finite state spaces and an infinite-horizon, discounted-reward setting. Although the resulting optimization problem is non-convex with respect to policies, we reformulate it as a convex optimization problem in terms of state-action occupation measures by leveraging the linear programming framework of Markov Decision Processes. Based on this convex reformulation, we introduce a gradient descent algorithm with a guaranteed convergence rate to efficiently compute the optimal solution. Moreover, we develop a new method that conceptualizes the MFG problem as a Generalized Nash Equilibrium Problem (GNEP), enabling effective computation of the mean-field equilibrium for forward reinforcement learning (RL) problems and marking an advancement in MFG solution techniques. We further illustrate the practical applicability of our GNEP approach by employing this algorithm to generate data for numerical MFG examples.
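To make the occupation-measure reformulation concrete, the sketch below sets up the entropy-regularized discounted MDP (the single-agent problem that arises once the mean-field term is held fixed) as a convex program over state-action occupation measures. It is a minimal illustration, not the paper's method: the state/action counts, transition kernel, and reward are randomly generated placeholders, cvxpy with an off-the-shelf conic solver stands in for the authors' gradient descent algorithm and its convergence guarantee, and the GNEP machinery for the full mean-field equilibrium is omitted.

```python
import numpy as np
import cvxpy as cp

# Hypothetical toy problem: sizes, transition kernel, and reward are made up.
S, A = 4, 3                      # number of states and actions
gamma = 0.9                      # discount factor
rng = np.random.default_rng(0)

P = rng.random((S, A, S))        # P[s, a, s'] = probability of moving s -> s' under a
P /= P.sum(axis=2, keepdims=True)
r = rng.random((S, A))           # one-step reward r(s, a), mean-field term held fixed
mu0 = np.full(S, 1.0 / S)        # initial state distribution

# Decision variable: discounted state-action occupation measure d(s, a) >= 0.
d = cp.Variable((S, A), nonneg=True)

# Bellman-flow constraints from the linear programming formulation of discounted MDPs:
#   sum_a d(s, a) = (1 - gamma) * mu0(s) + gamma * sum_{s', a'} P(s | s', a') d(s', a')
flow_in = cp.hstack([cp.sum(cp.multiply(P[:, :, s_next], d)) for s_next in range(S)])
constraints = [cp.sum(d, axis=1) == (1 - gamma) * mu0 + gamma * flow_in]

# Objective: expected discounted reward plus a causal-entropy term. Writing the
# policy as pi(a|s) = d(s, a) / sum_a' d(s, a'), the entropy term is
#   -sum_{s,a} d(s, a) * log( d(s, a) / sum_a' d(s, a') ),
# a negative relative entropy and hence concave in d, so the whole problem is
# convex in the occupation measure even though it is non-convex in the policy.
row_sums = cp.vstack([cp.sum(d, axis=1)] * A).T      # broadcast state marginals to (S, A)
causal_entropy = -cp.sum(cp.rel_entr(d, row_sums))
objective = cp.Maximize(cp.sum(cp.multiply(r, d)) + causal_entropy)

prob = cp.Problem(objective, constraints)
prob.solve()                     # any exponential-cone-capable solver (e.g. SCS) works

# Recover the maximum causal entropy policy from the occupation measure.
pi = d.value / d.value.sum(axis=1, keepdims=True)
print("optimal value:", prob.value)
print("policy:\n", np.round(pi, 3))
```

In the paper's setting this single-agent problem would be one building block: the reward depends on the mean-field term, and the equilibrium consistency condition is what the GNEP formulation and the gradient method with a guaranteed convergence rate are designed to handle.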
Author Details
Berkay Anahtarci
Can Deha Kariksiz
Naci Saldi
Research Topics & Keywords
Causal Inference
Citation Information
APA Format
Anahtarci, B., Kariksiz, C. D., & Saldi, N. Maximum Causal Entropy IRL in Mean-Field Games and GNEP Framework for Forward RL. Journal of Machine Learning Research.
BibTeX Format
@article{paper534,
title = { Maximum Causal Entropy IRL in Mean-Field Games and GNEP Framework for Forward RL },
author = {
Berkay Anahtarci
and Can Deha Kariksiz
and Naci Saldi
},
journal = { Journal of Machine Learning Research },
url = { https://www.jmlr.org/papers/v26/24-0458.html }
}