Error estimation and adaptive tuning for unregularized robust M-estimator
Authors
Pierre C. Bellec, Takuya Koriyama
Paper Information
Journal: Journal of Machine Learning Research
Added to Tracker: Jul 30, 2025
Abstract
We consider unregularized robust M-estimators for linear models under Gaussian design and heavy-tailed noise, in the proportional asymptotics regime where the sample size n and the number of features p both increase such that $p/n \to \gamma\in (0,1)$. An estimator of the out-of-sample error of a robust M-estimator is analyzed and proved to be consistent for a large family of loss functions that includes the Huber loss. As an application of this result, we propose an adaptive tuning procedure for the scale parameter $\lambda>0$ of a given loss function $\rho$: choosing $\hat \lambda$ within a given interval $I$ to minimize the out-of-sample error estimate of the M-estimator constructed with loss $\rho_\lambda(\cdot) = \lambda^2 \rho(\cdot/\lambda)$ leads to the optimal out-of-sample error over $I$. The proof relies on a smoothing argument: the unregularized M-estimation objective function is perturbed, or smoothed, with a Ridge penalty that vanishes as $n\to+\infty$, and the unregularized M-estimator of interest is shown to inherit properties of its smoothed version.
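As an illustration of the adaptive tuning procedure described in the abstract, below is a minimal Python sketch (not the paper's implementation): it fits unregularized Huber M-estimators over a grid of scale parameters and selects the value of $\lambda$ minimizing a hold-out proxy for the out-of-sample error. The hold-out split, the Student-t noise, the ratio p/n = 0.2, and the grid standing in for the interval $I$ are illustrative assumptions; the paper's consistent error estimate requires no held-out data.

import numpy as np
from scipy.optimize import minimize
from scipy.special import huber   # huber(delta, r): Huber loss with threshold delta

rng = np.random.default_rng(0)
n, p = 2000, 400                         # proportional regime, p/n = 0.2
X = rng.standard_normal((n, p))          # Gaussian design
beta_star = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta_star + rng.standard_t(df=3, size=n)   # heavy-tailed noise

def huber_mestimator(X, y, lam):
    # Unregularized M-estimator with loss rho_lam(x) = lam^2 * rho(x / lam),
    # i.e. the Huber loss with transition point lam.
    def obj(beta):
        return huber(lam, y - X @ beta).sum()
    def grad(beta):
        psi = np.clip(y - X @ beta, -lam, lam)     # derivative of the Huber loss
        return -X.T @ psi
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares warm start
    return minimize(obj, beta0, jac=grad, method="L-BFGS-B").x

# Hold-out proxy for the out-of-sample error (an illustrative stand-in for the
# paper's consistent estimate, which needs no sample splitting).
X_test = rng.standard_normal((n, p))
y_test = X_test @ beta_star + rng.standard_t(df=3, size=n)

lambdas = np.geomspace(0.1, 10.0, 20)    # discretization of the interval I
errors = [np.mean((y_test - X_test @ huber_mestimator(X, y, lam)) ** 2)
          for lam in lambdas]
lam_hat = lambdas[int(np.argmin(errors))]
print(f"selected scale parameter lambda_hat = {lam_hat:.3f}")

Replacing the hold-out proxy with the paper's data-driven error estimate leaves the selection rule unchanged: minimize the estimate over the grid and return the minimizer $\hat\lambda$.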
Author Details
Pierre C. Bellec
Takuya Koriyama
Citation Information
APA Format
Pierre C. Bellec & Takuya Koriyama. Error estimation and adaptive tuning for unregularized robust M-estimator. Journal of Machine Learning Research.
BibTeX Format
@article{paper301,
  title   = {Error estimation and adaptive tuning for unregularized robust M-estimator},
  author  = {Pierre C. Bellec and Takuya Koriyama},
  journal = {Journal of Machine Learning Research},
  url     = {https://www.jmlr.org/papers/v26/24-0060.html}
}