How Receptive Field Parameters Affect Neural Learning

Part of Advances in Neural Information Processing Systems 3 (NIPS 1990)


Authors

Bartlett Mel, Stephen Omohundro

Abstract

We identify the three principal factors affecting the performance of learning by networks with localized units: unit noise, sample density, and the structure of the target function. We then analyze the effect of unit receptive field parameters on these factors and use this analysis to propose a new learning algorithm which dynamically alters receptive field properties during learning.

1 LEARNING WITH LOCALIZED RECEPTIVE FIELDS

Locally-tuned representations are common in both biological and artificial neural networks. Several workers have analyzed the effect of receptive field size, shape, and overlap on representation accuracy (Baldi, 1988; Ballard, 1987; Hinton, 1986). This paper investigates the additional interactions introduced by the task of function learning. Previous studies that have considered learning have, for the most part, restricted attention to using the input probability distribution to determine receptive field layout (Kohonen, 1984; Moody and Darken, 1989). We will see that the structure of the function being learned may also be advantageously taken into account.

Function learning using radial basis functions (RBFs) is currently a popular technique (Broomhead and Lowe, 1988) and serves as an adequate framework for our discussion. Because we are interested in constraints on biological systems, we must explicitly consider the effects of unit noise. The goal is to choose the layout of receptive fields so as to minimize average performance error. Let y = f(x) be the function the network is attempting to learn from example
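As a concrete illustration of this framework, the following is a minimal sketch of RBF function learning with Gaussian localized units, a fixed receptive field layout, a least-squares fit of the output weights, and optional additive unit noise. The target function, centers, width, and noise level here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative sketch of function learning with Gaussian radial basis
# functions (RBFs). The target function, receptive field layout, width,
# and noise level are assumptions for illustration only.

def rbf_activations(x, centers, width, noise_std=0.0):
    """Responses of localized units to inputs x of shape (n_samples, d)."""
    # Squared distance from every input to every receptive field center.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    act = np.exp(-d2 / (2.0 * width ** 2))
    if noise_std > 0.0:
        # Additive unit noise, one of the factors affecting performance error.
        act = act + np.random.normal(0.0, noise_std, size=act.shape)
    return act

def fit_output_weights(x, y, centers, width):
    """Least-squares fit of output weights for a fixed receptive field layout."""
    return np.linalg.lstsq(rbf_activations(x, centers, width), y, rcond=None)[0]

# Example: learn y = f(x) = sin(2*pi*x) from sampled (x, y) pairs.
rng = np.random.default_rng(0)
x_train = rng.random((50, 1))                    # sample density: 50 examples
y_train = np.sin(2 * np.pi * x_train[:, 0])      # structure of the target function
centers = np.linspace(0.0, 1.0, 10)[:, None]     # receptive field centers
width = 0.1                                      # receptive field size
w = fit_output_weights(x_train, y_train, centers, width)
y_hat = rbf_activations(np.array([[0.25]]), centers, width, noise_std=0.05) @ w
```

In this sketch the receptive field parameters (centers and width) are held fixed while only the output weights are fit; the point of the analysis that follows is that those parameters themselves interact with unit noise, sample density, and the target function's structure.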