Principled Penalty-based Methods for Bilevel Reinforcement Learning and RLHF
Authors
Han Shen, Zhuoran Yang, Tianyi Chen

Paper Information
Journal: Journal of Machine Learning Research
Added to Tracker: Jul 15, 2025
Abstract
Bilevel optimization has recently been applied to many machine learning tasks. However, its applications have been restricted to the supervised learning setting, where static objective functions with benign structures are considered. Bilevel problems such as incentive design, inverse reinforcement learning (RL), and RL from human feedback (RLHF) are instead often modeled with dynamic objective functions that go beyond simple static structures, which poses significant challenges for existing bilevel solutions. To tackle this new class of bilevel problems, we introduce the first principled algorithmic framework for solving bilevel RL problems through the lens of a penalty formulation. We provide theoretical studies of the problem landscape and of penalty-based (policy) gradient algorithms. We demonstrate the effectiveness of our algorithms via simulations in Stackelberg Markov games, RL from human feedback, and incentive design.
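The penalty idea the abstract refers to can be made concrete with the standard value-function penalty reformulation of a generic bilevel problem. The notation below (leader objective f, follower objective g, penalty weight lambda) is ours and serves only as a sketch; the paper's exact construction for the RL setting may differ.

% Sketch: value-function penalty reformulation (our notation, not
% necessarily the exact formulation used in the paper).
\begin{align*}
  \min_{x}\; & f\bigl(x, y^{*}(x)\bigr)
  \quad \text{s.t.} \quad y^{*}(x) \in \arg\min_{y} g(x, y), \\
  \min_{x,\, y}\; & f(x, y)
  + \lambda \Bigl( g(x, y) - \min_{y'} g(x, y') \Bigr).
\end{align*}

The second line is the penalized single-level surrogate: the bracketed term is nonnegative and vanishes exactly when y is optimal for the follower, so under suitable conditions and for a sufficiently large lambda, its approximate minimizers recover approximate solutions of the original bilevel problem.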
Author Details
Han Shen (Author)
Zhuoran Yang (Author)
Tianyi Chen (Author)

Citation Information
APA Format
Shen, H., Yang, Z., & Chen, T. (2025). Principled penalty-based methods for bilevel reinforcement learning and RLHF. Journal of Machine Learning Research, 26(114), 1–49. http://jmlr.org/papers/v26/24-0720.html
BibTeX Format
@article{JMLR:v26:24-0720,
author = {Han Shen and Zhuoran Yang and Tianyi Chen},
title = {Principled Penalty-based Methods for Bilevel Reinforcement Learning and RLHF},
journal = {Journal of Machine Learning Research},
year = {2025},
volume = {26},
number = {114},
pages = {1--49},
url = {http://jmlr.org/papers/v26/24-0720.html}
}