Optimal Signalling in Attractor Neural Networks

Part of Advances in Neural Information Processing Systems 6 (NIPS 1993)


Authors

Isaac Meilijson, Eytan Ruppin

Abstract

In [Meilijson and Ruppin, 1993] we presented a methodological framework describing the two-iteration performance of Hopfield-like attractor neural networks with history-dependent, Bayesian dynamics. We now extend this analysis in a number of directions: input patterns applied to small subsets of neurons, general connectivity architectures and more efficient use of history. We show that the optimal signal (activation) function has a slanted sigmoidal shape, and provide an intuitive account of activation functions with a non-monotone shape. This function endows the model with some properties characteristic of cortical neurons' firing.
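To make the "slanted sigmoidal" shape concrete, below is a minimal illustrative sketch in Python. The functional form (a saturating sigmoid plus a linear slant term) and the parameter values are assumptions chosen only to visualize the qualitative shape described in the abstract; they are not the optimal signal function derived in the paper.

```python
import numpy as np

def slanted_sigmoid(h, beta=4.0, gamma=0.3):
    """Illustrative slanted sigmoidal signal function.

    h     : neuron's input field (scalar or array)
    beta  : steepness of the saturating part (example value)
    gamma : slope of the linear 'slant' (example value)

    Both parameters are hypothetical; the paper derives the
    optimal shape rather than assuming this parametric form.
    """
    return np.tanh(beta * h) + gamma * h

# Example: evaluate the signal over a range of input fields.
h = np.linspace(-2.0, 2.0, 9)
print(slanted_sigmoid(h))
```

Unlike a standard sigmoid, this function does not saturate: the linear term keeps the output growing with the input field, which is the qualitative feature the abstract refers to as "slanted".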