Learning Continuous Attractors in Recurrent Networks

Part of Advances in Neural Information Processing Systems 10 (NIPS 1997)

Authors

H. Sebastian Seung

Abstract

One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn. From a statistical viewpoint, the pattern completion task allows a formulation of unsupervised learning in terms of regression rather than density estimation.

A classic approach to invariant object recognition is to use a recurrent neural network as an associative memory [1]. In spite of the intuitive appeal and biological plausibility of this approach, it has largely been abandoned in practical applications. This paper introduces two new concepts that could help resurrect it: object representation by continuous attractors, and learning attractors by pattern completion.

In most models of associative memory, memories are stored as attractive fixed points at discrete locations in state space [1]. Discrete attractors may not be appropriate for patterns with continuous variability, like the images of a three-dimensional object from different viewpoints. When the instantiations of an object lie on a continuous pattern manifold, it is more appropriate to represent objects by attractive manifolds of fixed points, or continuous attractors.

To make this idea practical, it is important to find methods for learning attractors from examples. A naive method is to train the network to retain examples in short-term memory. This method is deficient because it does not prevent the network from storing spurious fixed points that are unrelated to the examples. A superior method is to train the network to restore examples that have been corrupted, so that it learns to complete patterns by filling in missing information.
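To make the pattern-completion training scheme concrete, the following is a minimal NumPy sketch, not the architecture or learning rule used in this paper: patterns are Gaussian bumps drawn from a one-dimensional ring manifold, random components are deleted, and a recurrent network is trained by gradient descent to fill them back in. The bump width, the clamping of observed units during relaxation, and the one-step gradient approximation are illustrative assumptions.

```python
import numpy as np

# Sketch only: train a recurrent net to restore corrupted patterns drawn
# from a 1-D manifold (Gaussian bumps at random positions on a ring).
rng = np.random.default_rng(0)
N, T, lr, sigma = 50, 5, 0.05, 0.08   # units, relaxation steps, step size, bump width
positions = np.arange(N) / N

def make_pattern(center):
    # Gaussian bump centered at `center` on the unit ring.
    d = np.minimum(np.abs(positions - center), 1 - np.abs(positions - center))
    return np.exp(-d**2 / (2 * sigma**2))

W = 0.01 * rng.standard_normal((N, N))

for step in range(2000):
    x = make_pattern(rng.random())          # clean pattern on the manifold
    mask = rng.random(N) > 0.3              # 1 = observed, 0 = missing
    v = x * mask                            # corrupted example

    # Relax the recurrent dynamics: clamp observed units, fill in the rest.
    states = [v]
    for _ in range(T):
        h = np.tanh(W @ states[-1])
        v = mask * x + (1 - mask) * h       # keep known values, update missing
        states.append(v)

    # Gradient step on the completion error at the final state
    # (one-step approximation of backprop through the relaxation).
    err = states[-1] - x
    pre = W @ states[-2]
    grad = np.outer((1 - mask) * err * (1 - np.tanh(pre)**2), states[-2])
    W -= lr * grad

# After training, a corrupted bump at any position should be filled in,
# i.e. the fixed points approximate the continuous pattern manifold.
test = make_pattern(0.37)
mask = rng.random(N) > 0.3
v = test * mask
for _ in range(20):
    v = mask * test + (1 - mask) * np.tanh(W @ v)
print("relative completion error:", np.linalg.norm(v - test) / np.linalg.norm(test))
```

The contrast drawn above is visible in this sketch: training only to retain the clamped input would leave the weights on the missing units unconstrained, allowing spurious fixed points, whereas penalizing the completion error forces the learned attractor to follow the manifold of bumps.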
