Complexity of Finite Precision Neural Network Classifier

Part of Advances in Neural Information Processing Systems 2 (NIPS 1989)

Authors

Amir Dembo, Kai-Yeung Siu, Thomas Kailath

Abstract

A rigorous analysis of the finite precision computational aspects of a neural network as a pattern classifier, via a probabilistic approach, is presented. Even though there exist negative results on the capability of the perceptron, we show the following positive results: Given n pattern vectors, each represented by cn bits where c > 1, that are uniformly distributed, with high probability the perceptron can perform all possible binary classifications of the patterns. Moreover, the resulting neural network requires a vanishingly small proportion, O(log n/n), of the memory that would be required for complete storage of the patterns. Further, the perceptron algorithm takes O(n^2) arithmetic operations with high probability, whereas other methods such as linear programming take O(n^3.5) in the worst case. We also indicate some mathematical connections with VLSI circuit testing and the theory of random matrices.