Efficient Parallel Learning Algorithms for Neural Networks

Part of Advances in Neural Information Processing Systems 1 (NIPS 1988)


Authors

Alan Kramer, Alberto Sangiovanni-Vincentelli

Abstract

Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribiere method are also significantly more efficient than the Backpropagation algorithm. These results are based on experiments performed on small Boolean learning problems and the noisy real-valued learning problem of hand-written character recognition.
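To make the named technique concrete, the following is a minimal sketch of the Polak-Ribiere conjugate gradient method on a toy convex quadratic. This is an illustration of the general algorithm only, not the authors' implementation; the problem, the backtracking line search, and the restart safeguard are assumptions chosen for a self-contained example.

```python
import numpy as np

# Toy problem (an assumption for illustration): minimize
#   f(x) = 0.5 * x^T A x - b^T x,  whose minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def f(x):
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

def polak_ribiere(x, iters=50, tol=1e-10):
    """Polak-Ribiere conjugate gradient with a simple backtracking
    line search and a steepest-descent restart safeguard."""
    g = grad(x)
    d = -g  # first search direction: steepest descent
    for _ in range(iters):
        # Backtracking (Armijo) line search along the descent direction d.
        alpha, c = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        # Polak-Ribiere coefficient (PR+ variant, clipped at zero):
        #   beta = g_new . (g_new - g) / (g . g)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        if g_new @ d >= 0:
            # Safeguard: with an inexact line search the PR direction
            # can fail to be a descent direction; restart if so.
            d = -g_new
        x, g = x_new, g_new
    return x

x_star = polak_ribiere(np.array([0.0, 0.0]))
```

Unlike plain gradient descent (Backpropagation in the paper's setting), each new search direction reuses curvature information from the previous one via the beta term, which is the source of the faster convergence the abstract reports.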