
Error estimation and adaptive tuning for unregularized robust M-estimator

Authors
Pierre C. Bellec, Takuya Koriyama
Paper Information
  • Journal:
    Journal of Machine Learning Research
  • Added to Tracker:
    Jul 15, 2025
Abstract

We consider unregularized robust M-estimators for linear models under Gaussian design and heavy-tailed noise, in the proportional asymptotics regime where the sample size $n$ and the number of features $p$ both increase such that $p/n \to \gamma\in (0,1)$. An estimator of the out-of-sample error of a robust M-estimator is analyzed and proved to be consistent for a large family of loss functions that includes the Huber loss. As an application of this result, we propose an adaptive tuning procedure for the scale parameter $\lambda>0$ of a given loss function $\rho$: choosing the $\hat \lambda$ in a given interval $I$ that minimizes the out-of-sample error estimate of the M-estimator constructed with loss $\rho_\lambda(\cdot) = \lambda^2 \rho(\cdot/\lambda)$ leads to the optimal out-of-sample error over $I$. The proof relies on a smoothing argument: the unregularized M-estimation objective function is perturbed, or smoothed, with a Ridge penalty that vanishes as $n\to+\infty$, and we show that the unregularized M-estimator of interest inherits properties of its smoothed version.
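To make the tuning step concrete, the following is a minimal Python sketch (not taken from the paper) of selecting $\hat\lambda$ over a grid in $I$ by minimizing an error estimate, with the scaled loss $\rho_\lambda(\cdot) = \lambda^2 \rho(\cdot/\lambda)$ built from the Huber loss $\rho$. The data-generating setup, the grid over $I$, and the error_estimate function below are illustrative assumptions; in particular, error_estimate is a naive held-out proxy standing in for the paper's consistent estimate of the out-of-sample error, which is built from the training data alone.

# Minimal sketch of the adaptive tuning procedure described in the abstract.
# Hypothetical choices (not from the paper): the held-out `error_estimate`,
# the grid `I_grid`, the synthetic data, and the Huber threshold 1 in `rho`.

import numpy as np
from scipy.optimize import minimize

def rho(u):
    """Base Huber loss: u^2/2 for |u| <= 1, |u| - 1/2 otherwise."""
    a = np.abs(u)
    return np.where(a <= 1.0, 0.5 * u**2, a - 0.5)

def rho_lambda(u, lam):
    """Scaled loss rho_lambda(u) = lam^2 * rho(u / lam) from the abstract."""
    return lam**2 * rho(u / lam)

def fit_m_estimator(X, y, lam):
    """Unregularized M-estimator: minimize sum_i rho_lambda(y_i - x_i' b)."""
    def objective(b):
        return rho_lambda(y - X @ b, lam).sum()
    b0 = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares warm start
    return minimize(objective, b0, method="L-BFGS-B").x

def error_estimate(X, y, lam, X_val, y_val):
    """Hypothetical stand-in: held-out squared error. The paper instead uses a
    consistent out-of-sample error estimate computed from the training data."""
    b_hat = fit_m_estimator(X, y, lam)
    return np.mean((y_val - X_val @ b_hat) ** 2)

# Adaptive tuning: pick lambda_hat in the interval I minimizing the error estimate.
rng = np.random.default_rng(0)
n, p = 400, 100                                    # proportional regime, p/n = 0.25
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + rng.standard_t(df=2, size=n)        # heavy-tailed noise
X_val = rng.standard_normal((n, p))
y_val = X_val @ beta + rng.standard_t(df=2, size=n)

I_grid = np.geomspace(0.1, 10.0, 20)               # grid over the interval I
errors = [error_estimate(X, y, lam, X_val, y_val) for lam in I_grid]
lam_hat = I_grid[int(np.argmin(errors))]
print(f"selected lambda_hat = {lam_hat:.3f}")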

Author Details
  • Pierre C. Bellec (Author)
  • Takuya Koriyama (Author)
Citation Information
APA Format
Pierre C. Bellec & Takuya Koriyama . Error estimation and adaptive tuning for unregularized robust M-estimator. Journal of Machine Learning Research .
BibTeX Format
@article{JMLR:v26:24-0060,
  author  = {Pierre C. Bellec and Takuya Koriyama},
  title   = {Error estimation and adaptive tuning for unregularized robust M-estimator},
  journal = {Journal of Machine Learning Research},
  year    = {2025},
  volume  = {26},
  number  = {16},
  pages   = {1--40},
  url     = {http://jmlr.org/papers/v26/24-0060.html}
}