Fundamental Limits of Membership Inference Attacks on Machine Learning Models
Paper Information
- Journal: Journal of Machine Learning Research
- Added to Tracker: Dec 30, 2025
Abstract
Membership inference attacks (MIAs) can reveal whether a particular data point was part of the training dataset, potentially exposing sensitive information about individuals. This article provides theoretical guarantees by exploring the fundamental statistical limitations of MIAs on machine learning models at large. More precisely, we first derive the statistical quantity that governs the effectiveness and success of such attacks. We then theoretically prove that in a non-linear regression setting with overfitting learning procedures, attacks may have a high probability of success. Finally, we investigate several situations for which we provide bounds on this quantity of interest. Interestingly, our findings indicate that discretizing the data might enhance the learning procedure's security: the attack's success is then demonstrated to be limited by a constant that quantifies the diversity of the underlying data distribution. We illustrate these results through simple simulations.
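To make the abstract's overfitting claim concrete, here is a minimal sketch (not the paper's construction) of a loss-thresholding membership inference attack against a Gaussian-kernel ridge regressor on non-linear regression data. The model choice, the attack rule, and every parameter (gamma, lam, the median threshold) are illustrative assumptions, not details from the paper.

# Hypothetical sketch (not the paper's method): a loss-threshold membership
# inference attack on a kernel ridge regressor. A near-interpolating
# (overfit) model should be attacked with accuracy close to 1, while a
# well-regularized one should push the attacker toward chance level (0.5).
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Non-linear regression data: y = sin(2x) + Gaussian noise.
    x = rng.uniform(-2.0, 2.0, n)
    y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(n)
    return x, y

def fit_kernel_ridge(x_tr, y_tr, lam, gamma):
    # Gaussian-kernel ridge regression; tiny lam means near-interpolation.
    K = np.exp(-gamma * (x_tr[:, None] - x_tr[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(x_tr)), y_tr)
    def predict(x):
        return np.exp(-gamma * (x[:, None] - x_tr[None, :]) ** 2) @ alpha
    return predict

def attack_accuracy(lam, gamma, n=50):
    # Balanced members/non-members; guess "member" iff squared loss < tau.
    x_in, y_in = make_data(n)       # members (the training set)
    x_out, y_out = make_data(n)     # non-members (fresh draws)
    predict = fit_kernel_ridge(x_in, y_in, lam, gamma)
    loss_in = (predict(x_in) - y_in) ** 2
    loss_out = (predict(x_out) - y_out) ** 2
    tau = np.median(np.concatenate([loss_in, loss_out]))  # crude threshold
    return 0.5 * (np.mean(loss_in < tau) + np.mean(loss_out >= tau))

print("overfit     :", attack_accuracy(lam=1e-6, gamma=200.0))  # near 1.0
print("regularized :", attack_accuracy(lam=1e-1, gamma=1.0))    # near 0.5

The median threshold is deliberately crude. By the standard balanced hypothesis-testing identity, the best accuracy achievable by any loss-based test is (1 + TV)/2, where TV is the total variation distance between the member and non-member loss distributions; this is the flavor of statistical quantity the abstract alludes to, though the paper's exact quantity of interest may differ.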
Author Details
- Eric Aubinais
- Elisabeth Gassiat
- Pablo Piantanida

Research Topics & Keywords
- Machine Learning

Citation Information
APA Format
Aubinais, E., Gassiat, E., & Piantanida, P. (2025). Fundamental limits of membership inference attacks on machine learning models. Journal of Machine Learning Research, 26.
BibTeX Format
@article{paper667,
title = { Fundamental Limits of Membership Inference Attacks on Machine Learning Models },
author = {
Elisabeth Gassiat
and Eric Aubinais
and Pablo Piantanida
},
journal = { Journal of Machine Learning Research },
url = { https://www.jmlr.org/papers/v26/24-1515.html }
}