Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples

Part of Advances in Neural Information Processing Systems 3 (NIPS 1990)


Authors

Federico Girosi, Tomaso Poggio, Bruno Caprile

Abstract

Learning an input-output mapping from a set of examples can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory, and we have previously shown (Poggio and Girosi, 1990a, 1990b) the equivalence between regularization and a class of three-layer networks that we call regularization networks. In this note, we extend the theory by introducing ways of dealing with outliers and negative examples.
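The equivalence between regularization and a three-layer network can be illustrated with a minimal sketch: a Gaussian radial-basis-function network with one unit per example, whose coefficients solve the regularized linear system. This is an illustrative reconstruction, not the authors' implementation; the function names, the Gaussian width `sigma`, and the regularization weight `lam` are assumptions for the example.

```python
import numpy as np

def gaussian_green(X, centers, sigma=1.0):
    # Matrix of Gaussian basis values G[i, j] = exp(-||x_i - t_j||^2 / (2 sigma^2));
    # the Gaussian is one choice of Green's function for a regularization network.
    d = X[:, None, :] - centers[None, :, :]
    return np.exp(-np.sum(d ** 2, axis=2) / (2 * sigma ** 2))

def fit_regularization_network(X, y, lam=1e-3, sigma=1.0):
    # One center per example: coefficients c solve (G + lam * I) c = y,
    # where lam controls the trade-off between data fit and smoothness.
    G = gaussian_green(X, X, sigma)
    return np.linalg.solve(G + lam * np.eye(len(X)), y)

def predict(Xq, centers, c, sigma=1.0):
    # Network output: a weighted sum of basis functions centered on the examples.
    return gaussian_green(Xq, centers, sigma) @ c

# Toy usage: approximate f(x) = sin(x) from 50 noisy-free samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0])
c = fit_regularization_network(X, y, lam=1e-3, sigma=0.7)
train_err = np.max(np.abs(predict(X, X, c, sigma=0.7) - y))
```

The hidden layer computes the basis functions, and the output layer forms their weighted sum, which is the three-layer structure the abstract refers to.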