Scalable Non-linear Learning with Adaptive Polynomial Expansions

Part of Advances in Neural Information Processing Systems 27 (NIPS 2014)



Alekh Agarwal, Alina Beygelzimer, Daniel J. Hsu, John Langford, Matus J. Telgarsky


Can we effectively learn a nonlinear representation in time comparable to linear learning? We describe a new algorithm that explicitly and adaptively expands higher-order interaction features over base linear representations. The algorithm is designed for extreme computational efficiency, and an extensive experimental study shows that its computation/prediction tradeoff compares very favorably against strong baselines.
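The core idea of adaptively expanding interaction features can be illustrated with a minimal sketch: given weights from a base linear model, pick the strongest features and append their pairwise products as new columns. This is only an illustration under assumed details (the selection rule, the number of features `k`, and the function name `expand_interactions` are hypothetical); the paper's actual algorithm differs.

```python
import numpy as np

def expand_interactions(X, weights, k=2):
    """Illustrative sketch of adaptive interaction expansion: take the k base
    features with largest |weight| and append their pairwise products to X.
    (Not the paper's algorithm; details here are assumed for illustration.)"""
    top = np.argsort(np.abs(weights))[-k:]        # indices of the k strongest features
    new_cols = [X[:, i] * X[:, j]                 # all pairwise interaction products
                for idx, i in enumerate(top)
                for j in top[idx:]]
    return np.column_stack([X] + new_cols)

# Tiny synthetic usage example.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
w = np.array([0.1, 2.0, -1.5, 0.05])              # weights from a base linear model
X_exp = expand_interactions(X, w, k=2)
print(X_exp.shape)                                # (5, 7): 4 base + 3 interaction columns
```

Because only a few base features are selected, the expansion stays small relative to a full degree-2 polynomial expansion, which is the kind of computational saving the abstract alludes to.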