Information through a Spiking Neuron

Part of Advances in Neural Information Processing Systems 8 (NIPS 1995)


Authors

Charles Stevens, Anthony Zador

Abstract

While it is generally agreed that neurons transmit information about their synaptic inputs through spike trains, the code by which this information is transmitted is not well understood. An upper bound on the information encoded is obtained by hypothesizing that the precise timing of each spike conveys information. Here we develop a general approach to quantifying the information carried by spike trains under this hypothesis, and apply it to the leaky integrate-and-fire (IF) model of neuronal dynamics. We formulate the problem in terms of the probability distribution p(T) of interspike intervals (ISIs), assuming that spikes are detected with arbitrary but finite temporal resolution. In the absence of added noise, all the variability in the ISIs could encode information, and the information rate is simply the entropy of the ISI distribution, H(T) = ⟨-p(T) log2 p(T)⟩, times the spike rate. H(T) thus provides an exact expression for the information rate. The methods developed here can be used to determine experimentally the information carried by spike trains, even when the lower bound of the information rate provided by the stimulus reconstruction method is not tight. In a preliminary series of experiments, we have used these methods to estimate information rates of hippocampal neurons in slice in response to somatic current injection. These pilot experiments suggest information rates as high as 6.3 bits/spike.
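As a concrete illustration of the estimator described above, the following minimal sketch bins ISIs at a finite temporal resolution dt, computes the entropy of the resulting histogram, and multiplies by the spike rate. The function name, binning scheme, and parameter choices are our own illustrative assumptions, not the paper's.

```python
import numpy as np

def isi_information_rate(spike_times, dt):
    """Estimate information rate from the ISI histogram, assuming (as in
    the noiseless case above) that all ISI variability carries information.

    spike_times : 1-D array of spike times (seconds)
    dt          : temporal resolution of spike detection (seconds)
    """
    isis = np.diff(np.sort(spike_times))
    # Discretize ISIs into bins of width dt (finite temporal resolution).
    counts = np.bincount((isis / dt).astype(int))
    p = counts[counts > 0] / counts.sum()
    # Entropy of the ISI distribution, H(T) = <-p(T) log2 p(T)>, in bits/spike.
    bits_per_spike = -np.sum(p * np.log2(p))
    spike_rate = len(isis) / (spike_times.max() - spike_times.min())
    return bits_per_spike, bits_per_spike * spike_rate  # bits/spike, bits/s
```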

1  Information rate of spike trains

Cortical neurons use spike trains to communicate with other neurons. The output of each neuron is a stochastic function of its inputs from other neurons. It is of interest to know how much each neuron is telling other neurons about its inputs.

How much information does the spike train provide about a signal? Consider noise n(t) added to a signal s(t) to produce some total input y(t) = s(t) + n(t). This is then passed through a (possibly stochastic) functional F to produce the output spike train F[y(t)] → z(t). We assume that all the information contained in the spike train can be represented by the list of spike times; that is, there is no extra information contained in properties such as spike height or width. Note, however, that many characteristics of the spike train, such as the mean or instantaneous rate, can be derived from this representation; if such a derivative property turns out to be the relevant one, then this formulation can be specialized appropriately.
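A toy realization of the mapping F[y(t)] → z(t) is a leaky integrate-and-fire neuron driven by signal plus noise. The sketch below is illustrative only; the parameter values and function names are our own assumptions, not those used in the paper.

```python
import numpy as np

def lif_spike_times(s, dt=1e-4, tau=0.02, v_th=1.0, noise_std=0.1, seed=0):
    """Map an input signal s(t) to spike times z(t) through a leaky
    integrate-and-fire functional F[y(t)], with y(t) = s(t) + n(t).
    All parameter values here are illustrative."""
    rng = np.random.default_rng(seed)
    n = noise_std * rng.standard_normal(len(s))   # additive noise n(t)
    y = s + n                                     # total input y(t)
    v, spikes = 0.0, []
    for i, drive in enumerate(y):
        # Leaky integration: dv/dt = (-v + drive) / tau
        v += dt * (-v + drive) / tau
        if v >= v_th:                             # threshold crossing -> spike
            spikes.append(i * dt)
            v = 0.0                               # reset after each spike
    return np.array(spikes)
```

Spike times generated this way can be fed directly into the ISI-entropy sketch given after the abstract.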

We will be interested, then, in the mutual information I(S(t); Z(t)) between the input signal ensemble S(t) and the output spike train ensemble Z(t). This is defined in terms of the entropy H(S) of the signal, the entropy H(Z) of the spike train, and their joint entropy H(S, Z),

I(S; Z) = H(S) + H(Z) - H(S, Z).   (1)

Note that the mutual information is symmetric, I(S; Z) = I(Z; S), since the joint entropy H(S, Z) = H(Z, S). Note also that if the signal S(t) and the spike train Z(t) are completely independent, then the mutual information is 0, since the joint entropy is just the sum of the individual entropies, H(S, Z) = H(S) + H(Z). This is completely in line with our intuition, since in this case the spike train can provide no information about the signal.
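Eq. (1) and both of these properties are easy to check on a small discrete example. The joint distribution below is invented purely for illustration:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(p_sz):
    """I(S;Z) = H(S) + H(Z) - H(S,Z) for a joint distribution p_sz."""
    return (entropy(p_sz.sum(axis=1)) + entropy(p_sz.sum(axis=0))
            - entropy(p_sz.ravel()))

# Correlated case: I > 0, and I(S;Z) = I(Z;S) by symmetry.
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
print(mutual_information(p), mutual_information(p.T))  # ~0.278 bits each

# Independent case: p(s,z) = p(s)p(z), so H(S,Z) = H(S) + H(Z) and I = 0.
ps, pz = np.array([0.5, 0.5]), np.array([0.3, 0.7])
print(mutual_information(np.outer(ps, pz)))  # ~0.0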

1.1  Information estimation through stimulus reconstruction

Bialek and colleagues (Bialek et al., 1991) have used the reconstruction method to obtain a strict lower bound on the mutual information in an experimental setting. This method is based on an expression mathematically equivalent to eq. (1) involving the conditional entropy H(S|Z) of the signal given the spike train,

I(S; Z) = H(S) - H(S|Z).   (2)
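The equivalence of eq. (2) with eq. (1) follows from the identity H(S|Z) = H(S,Z) - H(Z). Continuing the toy example above (this reuses entropy() and the joint table p from that sketch):

```python
# Since H(S|Z) = H(S,Z) - H(Z), eq. (2) reproduces eq. (1) exactly.
def mutual_information_conditional(p_sz):
    h_s = entropy(p_sz.sum(axis=1))                                  # H(S)
    h_s_given_z = entropy(p_sz.ravel()) - entropy(p_sz.sum(axis=0))  # H(S,Z) - H(Z)
    return h_s - h_s_given_z

print(mutual_information_conditional(p))  # same ~0.278 bits as via eq. (1)
```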