Efficient Learning of Linear Perceptrons

Part of Advances in Neural Information Processing Systems 13 (NIPS 2000)



Shai Ben-David, Hans-Ulrich Simon


We consider the existence of efficient algorithms for learning the class of half-spaces in ℝⁿ in the agnostic learning model (i.e., making no prior assumptions on the example-generating distribution). The resulting combinatorial problem - finding the best agreement half-space over an input sample - is NP-hard to approximate to within some constant factor. We suggest a way to circumvent this theoretical bound by introducing a new measure of success for such algorithms. An algorithm is μ-margin successful if the agreement ratio of the half-space it outputs is as good as that of any half-space once training points that are inside the μ-margins of its separating hyper-plane are disregarded. We prove crisp computational complexity results with respect to this success measure: On one hand, for every positive μ, there exist efficient (poly-time) μ-margin successful learning algorithms. On the other hand, we prove that unless P=NP, there is no algorithm that runs in time polynomial in the sample size and in 1/μ that is μ-margin successful for all μ > 0.
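To make the μ-margin success criterion concrete, the following is a minimal sketch of how the margin-discounted agreement ratio of a single half-space could be computed on a labeled sample. The helper name, the representation of the half-space as a pair (w, b), and the convention of disregarding in-margin points by counting them as satisfied are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def mu_margin_agreement(w, b, X, y, mu):
    """Agreement ratio of the half-space sign(w.x + b) on sample (X, y),
    disregarding points inside the mu-margin of the separating hyperplane.
    Hypothetical helper illustrating the abstract's success measure."""
    w = np.asarray(w, dtype=float)
    # signed Euclidean distance of each point from the hyperplane w.x + b = 0
    dist = (X @ w + b) / np.linalg.norm(w)
    inside_margin = np.abs(dist) < mu      # points to be disregarded
    correct = np.sign(dist) == y           # half-space agrees with the label
    # in-margin points are not held against the half-space
    return np.mean(correct | inside_margin)
```

An algorithm is then μ-margin successful if the half-space it outputs attains a plain agreement ratio at least as large as the best margin-discounted ratio over all half-spaces; the hardness result says no single poly(sample size, 1/μ)-time algorithm achieves this simultaneously for every μ > 0.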