Generalization Error Bounds for Aggregation by Mirror Descent with Averaging

Part of Advances in Neural Information Processing Systems 18 (NIPS 2005)


Authors

Anatoli Juditsky, Alexander Nazin, Alexandre Tsybakov, Nicolas Vayatis

Abstract

We consider the problem of constructing an aggregated estimator from a finite class of base functions which approximately minimizes a convex risk functional under the ℓ1 constraint. For this purpose, we propose a stochastic procedure, the mirror descent, which performs gradient descent in the dual space. The generated estimates are additionally averaged in a recursive fashion with specific weights. Mirror descent algorithms have been developed in different contexts and are known to be particularly efficient in high-dimensional problems. Moreover, their implementation is well suited to the online setting. The main result of the paper is an upper bound on the convergence rate for the generalization error.
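To make the procedure concrete, here is a minimal sketch of mirror descent with averaging on the probability simplex. It assumes an entropic mirror map (so the prox-mapping is a softmax, which respects the ℓ1 constraint), a step-size schedule proportional to 1/√t, and averaging weights proportional to the step sizes; the function name `mirror_descent_aggregate` and the stochastic-gradient callback `grad_fn` are illustrative conventions, not the paper's exact algorithm or notation.

```python
import numpy as np

def mirror_descent_aggregate(grad_fn, M, n_steps, step=None):
    """Sketch of mirror descent with weighted averaging on the simplex.

    grad_fn(theta, t) is assumed to return a stochastic gradient of the
    convex risk at theta, e.g. computed from the t-th observation.
    M is the number of base functions; n_steps the number of iterations.
    """
    zeta = np.zeros(M)            # dual variable: accumulated scaled gradients
    theta = np.full(M, 1.0 / M)   # primal iterate: uniform aggregation weights
    theta_bar = np.zeros(M)       # running weighted average of the iterates
    weight_sum = 0.0
    for t in range(1, n_steps + 1):
        gamma = step(t) if step else 1.0 / np.sqrt(t)  # assumed schedule
        zeta += gamma * grad_fn(theta, t)              # gradient step in the dual space
        # Entropic prox-mapping: softmax of -zeta (max-shifted for stability),
        # which maps the dual variable back onto the simplex.
        w = np.exp(-zeta - np.max(-zeta))
        theta = w / w.sum()
        theta_bar += gamma * theta                     # average with weights gamma
        weight_sum += gamma
    return theta_bar / weight_sum
```

The entropic mirror map is what matches the method to the ℓ1 geometry of the problem: every iterate stays on the simplex without projection, and error bounds for such schemes typically scale only logarithmically with the number M of base functions, which is the sense in which mirror descent is efficient in high dimension.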