Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks

Part of Advances in Neural Information Processing Systems 13 (NIPS 2000)

Authors

Richard Hahnloser, H. Sebastian Seung

Abstract

Ascribing computational principles to neural feedback circuits is an important problem in theoretical neuroscience. We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. We find that stability depends on two types of eigenmodes. One type determines global stability and the other type determines whether or not multistability is possible. We can prove the equivalence of our stability criteria with criteria taken from quadratic programming. Also, we show that there are permitted sets of neurons that can be coactive at a steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we can provide a formulation of long-term memory that is more general than the traditional perspective of fixed point attractor networks.

A Lyapunov function can be used to prove that a given set of differential equations is convergent. For example, if a neural network possesses a Lyapunov function, then for almost any initial condition, the outputs of the neurons converge to a stable steady state. In the past, this stability property was used to construct attractor networks that associatively recall memorized patterns. Lyapunov theory applies mainly to symmetric networks in which neurons have monotonic activation functions [1, 2]. Here we show that restricting the activation functions to threshold-linear ones is not a mere limitation, but can yield new insights into the computational behavior of recurrent networks (for completeness, see also [3]).
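To fix notation (a generic statement of the convergence argument, not specific to these networks): $V$ is a Lyapunov function for the dynamics $\dot{x} = f(x)$ if it is nonincreasing along every trajectory,

$$\frac{d}{dt} V(x(t)) \leq 0,$$

with equality only at steady states; bounded trajectories must then converge to the set of steady states.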

We present three main theorems about the neural responses to constant inputs. The first theorem provides necessary and sufficient conditions on the synaptic weight matrix for the existence of a globally asymptotically stable set of fixed points. These conditions can be expressed in terms of copositivity, a concept from quadratic programming and linear complementarity theory. Alternatively, they can be expressed in terms of certain eigenvalues and eigenvectors of submatrices of the synaptic weight matrix, making a connection to linear systems theory. The theorem guarantees that the network will produce a steady state response to any constant input. We regard this response as the computational output of the network, and its characterization is the topic of the second and third theorems.
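As a numerical illustration, the eigenvector form of the condition can be checked by brute force on small networks. The sketch below is our reading of that criterion (the formal statement, and the threshold-linear dynamics it presupposes, appear below): every nonnegative eigenvector of every principal submatrix of $I - W$ must have a positive eigenvalue.

```python
# Brute-force sketch of the eigenvector test (our reading of the
# criterion, assuming the threshold-linear dynamics of Section 1):
# every nonnegative eigenvector of every principal submatrix of
# I - W must have a positive eigenvalue.
from itertools import combinations

import numpy as np

def eigenvector_condition(W, tol=1e-10):
    """True if all nonnegative eigenvectors of all principal submatrices
    of I - W have positive eigenvalues (degenerate eigenspaces are
    ignored in this sketch)."""
    n = W.shape[0]
    A = np.eye(n) - W
    for k in range(1, n + 1):
        for sigma in combinations(range(n), k):
            vals, vecs = np.linalg.eigh(A[np.ix_(sigma, sigma)])
            for lam, v in zip(vals, vecs.T):
                # v counts as nonnegative if some sign choice makes
                # every component >= 0
                if (np.all(v >= -tol) or np.all(v <= tol)) and lam <= 0:
                    return False
    return True

W_ok = np.array([[0.0, 0.2],
                 [0.2, 0.0]])      # weak mutual excitation
W_bad = np.array([[1.5, 0.0],
                  [0.0, 0.0]])     # runaway self-excitation
print(eigenvector_condition(W_ok))   # True
print(eigenvector_condition(W_bad))  # False
```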

In the second theorem, we introduce the idea of permitted and forbidden sets. Under certain conditions on the synaptic weight matrix, we show that there exist sets of neurons that are "forbidden" by the recurrent synaptic connections from being coactivated at a stable steady state, no matter what input is applied. Other sets are "permitted," in the sense that they can be coactivated for some input. The same conditions on the synaptic weight matrix also lead to conditional multistability, meaning that there exists an input for which there is more than one stable steady state. In other words, forbidden sets and conditional multistability are inseparable concepts.
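A minimal example (ours, anticipating the stability criterion derived below): take two threshold-linear neurons with strong mutual inhibition,

$$W = \begin{pmatrix} 0 & -2 \\ -2 & 0 \end{pmatrix}.$$

The coactive pair $\{1, 2\}$ is forbidden, because the difference mode $(1, -1)$ of the subnetwork is unstable, so any joint activation collapses onto a single winner. Each singleton $\{1\}$ and $\{2\}$ is permitted, and for a uniform positive input the network has two stable steady states, one for each winner. This winner-take-all behavior exhibits a forbidden set and conditional multistability at once.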

The existence of permitted and forbidden sets suggests a new way of thinking about memory in neural networks. When an input is applied, the network must select a set of active neurons, and this selection is constrained to be one of the permitted sets. Therefore the permitted sets can be regarded as memories stored in the synaptic connections.

Our third theorem states that there are constraints on the families of permitted and forbidden sets that a network can store. No matter which learning algorithm is used to store memories, sets of neurons cannot be arbitrarily classified as permitted or forbidden, because every subset of a permitted set must be permitted and every superset of a forbidden set must be forbidden.
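To make the clustering constraint concrete, the following sketch enumerates permitted sets using our paraphrase of the stability criterion: a set $\sigma$ is permitted when the largest eigenvalue of the corresponding submatrix of $W$ is below unity (assuming the dynamics of Section 1). Cauchy's interlacing theorem guarantees that the largest eigenvalue of a principal submatrix never exceeds that of the matrix containing it, which is why subsets of permitted sets remain permitted.

```python
# Numerical sketch of the clustering constraint, using the eigenvalue
# criterion paraphrased above: sigma is permitted iff the largest
# eigenvalue of W[sigma, sigma] is below 1.
from itertools import combinations

import numpy as np

def permitted_sets(W):
    """Enumerate all permitted sets of the symmetric weight matrix W."""
    n = W.shape[0]
    permitted = []
    for k in range(1, n + 1):
        for sigma in combinations(range(n), k):
            sub = W[np.ix_(sigma, sigma)]
            if np.linalg.eigvalsh(sub).max() < 1.0:
                permitted.append(set(sigma))
    return permitted

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
W = (A + A.T) / 2                  # a random symmetric weight matrix
P = permitted_sets(W)

# Every nonempty proper subset of a permitted set is itself permitted.
assert all(set(s) in P
           for p in P
           for r in range(1, len(p))
           for s in combinations(sorted(p), r))
print(len(P), "permitted sets; the subset property holds")
```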

1 Basic definitions

Our theory is applicable to the network dynamics
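$$\dot{x}_i + x_i = \left[ b_i + \sum_j W_{ij} x_j \right]^+, \qquad i = 1, \ldots, n,$$

where $[u]^+ = \max\{u, 0\}$ is a rectification nonlinearity, $x_i$ denotes the activity of neuron $i$, $b_i$ its constant external input, and $W_{ij} = W_{ji}$ the synaptic weight from neuron $j$ to neuron $i$ (our reconstruction of the threshold-linear form used in [3]).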