Efficient Sampling for Learning Sparse Additive Models in High Dimensions

Part of Advances in Neural Information Processing Systems 27 (NIPS 2014)


Authors

Hemant Tyagi, Bernd Gärtner, Andreas Krause

Abstract

We consider the problem of learning sparse additive models, i.e., functions of the form $f(\mathbf{x}) = \sum_{l \in S} \phi_{l}(x_l)$, $\mathbf{x} \in \mathbb{R}^d$, from point queries of $f$. Here $S$ is an unknown subset of coordinate variables with $|S| = k \ll d$. Assuming that each $\phi_l$ is smooth, we propose a set of points at which to sample $f$ and an efficient randomized algorithm that recovers a \textit{uniform approximation} to each unknown $\phi_l$. We provide a rigorous theoretical analysis of our scheme along with sample complexity bounds. Our algorithm combines recent results from compressive sensing theory with a novel convex quadratic program for recovering robust uniform approximations to univariate functions from point queries corrupted with arbitrary bounded noise. Lastly, we theoretically analyze the impact of noise (either arbitrary but bounded, or stochastic) on the performance of our algorithm.
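To make the problem setting concrete, below is a minimal, illustrative Python sketch of a sparse additive model accessed only through point queries, together with a simplified two-stage recovery: the active set $S$, the component functions, the finite-difference scoring, and the Chebyshev polynomial fit are all assumptions made for illustration; they are stand-ins for, not implementations of, the paper's compressive-sensing sampling scheme and convex quadratic program.

```python
import numpy as np

# Hypothetical example: d = 100 ambient dimensions, k = 3 active coordinates.
# The active set S and the smooth components phi_l are chosen for illustration only.
d = 100
S = [4, 17, 58]
phis = [np.sin, np.cos, lambda t: t ** 2]

def f(x):
    """Point-query access to f(x) = sum_{l in S} phi_l(x_l)."""
    return sum(phi(x[l]) for phi, l in zip(phis, S))

rng = np.random.default_rng(0)

# Stage 1 (support identification, simplified): estimate partial derivatives by
# finite differences at a few random base points. Since the gradient of f is
# k-sparse, coordinates with consistently large estimates are flagged as active.
# (The paper exploits this sparsity via compressive sensing; this dense loop is
# only a naive stand-in with O(d) queries per base point.)
eps, n_base, k = 1e-4, 20, len(S)
scores = np.zeros(d)
for _ in range(n_base):
    x = rng.uniform(-1.0, 1.0, size=d)
    fx = f(x)
    for l in range(d):
        e = np.zeros(d)
        e[l] = eps
        scores[l] += abs(f(x + e) - fx) / eps
S_hat = sorted(np.argsort(scores)[-k:])

# Stage 2 (univariate recovery, simplified): query f along each estimated active
# coordinate (other coordinates held at 0) and fit a Chebyshev polynomial. Each
# phi_l is recovered only up to an additive constant, since the other components
# contribute a constant offset along this line. The paper instead solves a convex
# quadratic program giving uniform (sup-norm) guarantees under bounded query noise.
grid = np.linspace(-1.0, 1.0, 50)
approx = {}
for l in S_hat:
    x0 = np.zeros(d)
    vals = []
    for t in grid:
        x0[l] = t
        vals.append(f(x0))
    x0[l] = 0.0
    approx[l] = np.polynomial.Chebyshev.fit(grid, vals, deg=8)

print("estimated active set:", S_hat)
```

In this toy run the finite-difference scores of the inactive coordinates are essentially zero, so the top-$k$ scores recover $S$; the point of the paper is to achieve this kind of recovery with far fewer queries and with provable uniform-approximation guarantees under noise.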