An interior-point stochastic approximation method and an L1-regularized delta rule

Part of Advances in Neural Information Processing Systems 21 (NIPS 2008)


Authors

Peter Carbonetto, Mark Schmidt, Nando de Freitas

Abstract

The stochastic approximation method is behind the solution to many important, actively-studied problems in machine learning. Despite its far-reaching application, there is almost no work on applying stochastic approximation to learning problems with constraints. The reason for this, we hypothesize, is that no robust, widely-applicable stochastic approximation method exists for handling such problems. We propose that interior-point methods are a natural solution. We establish the stability of a stochastic interior-point approximation method both analytically and empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.
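To make the idea concrete, here is a minimal sketch of an online delta rule with L1 regularization handled by a log-barrier (interior-point) reformulation: the weights are split as w = u - v with u, v >= 0, and the nonnegativity constraints are enforced by a barrier term whose parameter decays over time. This is an illustrative approximation under stated assumptions, not the authors' exact algorithm; the function name l1_delta_rule and the parameters lam, eta0, and mu0 are hypothetical.

```python
import numpy as np

def l1_delta_rule(X, y, lam=0.1, eta0=0.1, mu0=1.0, n_epochs=5, seed=0):
    """Sketch (not the paper's exact method) of an L1-regularized
    delta rule using an interior-point-style barrier.

    Solves approximately: min_w E[(x.w - y)^2]/2 + lam * ||w||_1,
    rewritten with w = u - v, u >= 0, v >= 0, where the constraints
    are handled by a barrier -mu * (sum(log u) + sum(log v)) and the
    barrier parameter mu is driven to zero as iterations proceed.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    u = np.full(d, 1e-2)  # positive part of w, kept strictly positive
    v = np.full(d, 1e-2)  # negative part of w, kept strictly positive
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            t += 1
            eta = eta0 / np.sqrt(t)  # Robbins-Monro step size
            mu = mu0 / t             # barrier parameter, decays to 0
            w = u - v
            r = X[i] @ w - y[i]      # prediction error ("delta")
            # stochastic gradients of loss + lam*(u+v) - barrier
            gu = r * X[i] + lam - mu / u
            gv = -r * X[i] + lam - mu / v
            # damped step keeps the iterates strictly feasible
            u = np.maximum(u - eta * gu, 0.5 * u)
            v = np.maximum(v - eta * gv, 0.5 * v)
    return u - v
```

The max with 0.5*u (and 0.5*v) is a crude fraction-to-the-boundary safeguard: it prevents any single stochastic step from crossing the constraint boundary, which is the role the interior-point machinery plays in the method the abstract describes. As mu shrinks, components of u and v that the L1 penalty drives toward zero stay near zero, yielding the sparse solutions used for feature selection.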