{"title": "When is an Integrate-and-fire Neuron like a Poisson Neuron?", "book": "Advances in Neural Information Processing Systems", "page_first": 103, "page_last": 109, "abstract": null, "full_text": "When is an Integrate-and-fire Neuron \n\nlike a Poisson Neuron? \n\nCharles F. Stevens \nSalk Institute MNL/S \n\nLa Jolla, CA 92037 \n\ncfs@salk.edu \n\nAnthony Zador \n\nSalk Institute MNL/S \n\nLa Jolla, CA 92037 \n\nzador@salk.edu \n\nAbstract \n\nIn the Poisson neuron model, the output is a rate-modulated Pois(cid:173)\nson process (Snyder and Miller, 1991); the time varying rate pa(cid:173)\nrameter ret) is an instantaneous function G[.] of the stimulus, \nret) = G[s(t)]. In a Poisson neuron, then, ret) gives the instan(cid:173)\ntaneous firing rate-the instantaneous probability of firing at any \ninstant t-and the output is a stochastic function of the input. In \npart because of its great simplicity, this model is widely used (usu(cid:173)\nally with the addition of a refractory period), especially in in vivo \nsingle unit electrophysiological studies, where set) is usually taken \nto be the value of some sensory stimulus. In the integrate-and-fire \nneuron model, by contrast, the output is a filtered and thresholded \nfunction of the input: the input is passed through a low-pass filter \n(determined by the membrane time constant T) and integrated un(cid:173)\ntil the membrane potential vet) reaches threshold 8, at which point \nvet) is reset to its initial value. By contrast with the Poisson model, \nin the integrate-and-fire model the ouput is a deterministic function \nof the input. Although the integrate-and-fire model is a caricature \nof real neural dynamics, it captures many of the qualitative fea(cid:173)\ntures, and is often used as a starting point for conceptualizing the \nbiophysical behavior of single neurons. 
Here we show how a slightly modified Poisson model can be derived from the integrate-and-fire model with noisy inputs y(t) = s(t) + n(t). In the modified model, the transfer function G[.] is a sigmoid (erf) whose shape is determined by the noise variance σ_n². Understanding the equivalence between the dominant in vivo and in vitro simple neuron models may help forge links between the two levels. \n\n104 \n\nC. F. STEVENS, A. ZADOR \n\n1 Introduction \n\nIn the Poisson neuron model, the output is a rate-modulated Poisson process; the time-varying rate parameter r(t) is an instantaneous function G[.] of the stimulus, r(t) = G[s(t)]. In a Poisson neuron, then, r(t) gives the instantaneous firing rate (the instantaneous probability of firing at any instant t), and the output is a stochastic function of the input. In part because of its great simplicity, this model is widely used (usually with the addition of a refractory period), especially in in vivo single-unit electrophysiological studies, where s(t) is usually taken to be the value of some sensory stimulus. \n\nIn the integrate-and-fire neuron model, by contrast, the output is a filtered and thresholded function of the input: the input is passed through a low-pass filter (determined by the membrane time constant τ) and integrated until the membrane potential v(t) reaches threshold θ, at which point v(t) is reset to its initial value. By contrast with the Poisson model, in the integrate-and-fire model the output is a deterministic function of the input. Although the integrate-and-fire model is a caricature of real neural dynamics, it captures many of the qualitative features, and is often used as a starting point for conceptualizing the biophysical behavior of single neurons (Softky and Koch, 1993; Amit and Tsodyks, 1991; Shadlen and Newsome, 1995; Shadlen and Newsome, 1994; Softky, 1995; DeWeese, 1995; DeWeese, 1996; Zador and Pearlmutter, 1996). 
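The rate-modulated Poisson neuron of the Introduction is simple enough to sketch in a few lines. In the sketch below, the stimulus s(t), the transfer function G[.], and all parameter values are illustrative assumptions (the paper specifies none); spikes are drawn by firing in each small bin with probability r(t)·dt.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                       # time step (s); illustrative
t = np.arange(0.0, 2.0, dt)      # 2 s of simulated time

s = np.sin(2 * np.pi * t)        # example stimulus s(t)
G = lambda x: 20.0 * (1.0 + x)   # example transfer function G[.], rate in Hz
r = G(s)                         # instantaneous rate r(t) = G[s(t)]

# In each small bin, fire with probability r(t)*dt (valid when r*dt << 1).
spikes = rng.random(t.size) < r * dt
print(spikes.sum(), "spikes; expected mean count about", round(r.sum() * dt))
```

Given the same stimulus, repeated runs with different seeds produce different spike trains, which is the defining stochastic property of this model.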
\n\nHere we show how a slightly modified Poisson model can be derived from the integrate-and-fire model with noisy inputs y(t) = s(t) + n(t). In the modified model, the transfer function G[.] is a sigmoid (erf) whose shape is determined by the noise variance σ_n². Understanding the equivalence between the dominant in vivo and in vitro simple neuron models may help forge links between the two levels. \n\n2 The integrate-and-fire model \n\nHere we describe the forgetful (leaky) integrate-and-fire model. Suppose we add a signal s(t) to some noise n(t), \n\ny(t) = n(t) + s(t), \n\nand threshold the sum to produce a spike train \n\nz(t) = F[s(t) + n(t)], \n\nwhere F is the thresholding functional and z(t) is a list of firing times generated by the input. Specifically, suppose the voltage v(t) of the neuron obeys \n\ndv(t)/dt = -v(t)/τ + y(t),   (1) \n\nwhere τ is the membrane time constant. We assume that the noise n(t) has zero mean and is white with variance σ_n². Thus y(t) can be thought of as a Gaussian white process with variance σ_n² and a time-varying mean s(t). If the voltage reaches the threshold θ at some time t, the neuron emits a spike at that time and resets to the initial condition v₀. This is therefore a five-parameter model: the membrane time constant τ, the mean input signal μ, the variance of the input signal σ², the threshold θ, and the reset value v₀. Of course, if n(t) = 0, we recover a purely deterministic integrate-and-fire model. \n\nIn order to forge the link between the integrate-and-fire neuron dynamics and the Poisson model, we will treat the firing times T probabilistically. That is, we will express the output of the neuron to some particular input s(t) as a conditional distribution p(T|s(t)), i.e. the probability of obtaining any firing time T given some particular input s(t). 
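The model of eq. (1) can be simulated directly. Below is a minimal Euler-Maruyama sketch of dv/dt = -v/τ + y(t) with threshold, spike, and reset; every parameter value (τ, θ, v₀, the constant mean input, the noise amplitude) is an illustrative assumption chosen so the mean voltage sits just below threshold and firing is noise-driven.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.0001, 5.0          # integration step and duration (s); illustrative
tau = 0.050                  # membrane time constant tau (s)
theta, v0 = 1.0, 0.0         # threshold and reset value
s = 15.0                     # constant mean input, so v tends to s*tau = 0.75 (subthreshold)
sigma_n = 2.0                # noise amplitude of n(t)

v = v0
spike_times = []
for i in range(int(T / dt)):
    # Euler-Maruyama step: deterministic drift plus white-noise increment
    v += dt * (s - v / tau) + sigma_n * np.sqrt(dt) * rng.standard_normal()
    if v >= theta:
        spike_times.append(i * dt)
        v = v0               # reset after the spike

isi = np.diff(spike_times)
print(len(spike_times), "spikes; mean ISI", round(float(isi.mean()) * 1000, 1), "ms")
```

With the noise amplitude set to zero the same code reduces to the purely deterministic integrate-and-fire model, which with these subthreshold parameters never fires at all.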
\n\nUnder these assumptions, p(T) is given by the first passage time distribution (FPTD) of the Ornstein-Uhlenbeck process (Uhlenbeck and Ornstein, 1930; Tuckwell, 1988). This means that the time evolution of the voltage prior to reaching threshold is given by the Fokker-Planck equation (FPE), \n\n∂g(t,v)/∂t = (σ_y²/2) ∂²g(t,v)/∂v² - ∂/∂v [(s(t) - v(t)/τ) g(t,v)],   (2) \n\nwhere σ_y = σ_n and g(t,v) is the distribution at time t of voltage -∞ < v ≤ θ. Then the first passage time distribution is related to g(t,v) by \n\np(T) = -∂/∂t ∫_{-∞}^{θ} g(t,v) dv.   (3) \n\nThe integrand is the fraction of all paths that have not yet crossed threshold. p(T) is therefore just the interspike interval (ISI) distribution for a given signal s(t). A general eigenfunction expansion solution for the ISI distribution is known, but it converges slowly and its terms offer little insight into the behavior (at least to us). \n\nWe now derive an expression for the probability of crossing threshold in some very short interval Δt, starting at some v. We begin with the \"free\" distribution of g (Tuckwell, 1988): the probability of the voltage jumping to v' at time t' = t + Δt, given that it was at v at time t, assuming von Neumann boundary conditions at plus and minus infinity, \n\ng(t',v'|t,v) = (1/√(2π q(Δt; σ_y))) exp[-(v' - m(Δt))² / (2 q(Δt; σ_y))],   (4) \n\nwith \n\nq(Δt; σ_y) = (σ_y² τ / 2)(1 - e^{-2Δt/τ}) \n\nand \n\nm(Δt) = v e^{-Δt/τ} + s(t) * τ(1 - e^{-Δt/τ}), \n\nwhere * denotes convolution. The free distribution is a Gaussian with a time-dependent mean m(Δt) and variance q(Δt; σ_y). This expression is valid for all Δt. The probability of making a jump Δv = v' - v in a short interval Δt ≪ τ depends only on Δv and Δt, \n\ng_Δ(Δt, Δv; σ_y) = (1/√(2π q_Δ(σ_y))) exp[-Δv² / (2 q_Δ(σ_y))]. 
\n\nFor small Δt, we expand to get \n\nq_Δ(σ_y) ≈ σ_y² Δt,   (5) \n\nwhich is independent of τ, showing that the leak can be neglected for short times. \n\nNow the probability P_Δ that the voltage exceeds threshold in some short Δt, given that it started at v, depends on how far v is from threshold; it is \n\nPr[v + Δv ≥ θ] = Pr[Δv ≥ θ - v]. \n\nThus \n\nP_Δ = ∫_{θ-v}^{∞} dΔv g_Δ(Δt, Δv; σ_y) = (1/2) erfc((θ - v)/√(2 q_Δ(σ_y))) = (1/2) erfc((θ - v)/(σ_y √(2Δt))),   (6) \n\nwhere erfc(x) = 1 - (2/√π) ∫_0^x e^{-t²} dt decreases from 2 to 0. This then is the key result: it gives the instantaneous probability of firing as a function of the instantaneous voltage v. erfc is sigmoidal with a slope determined by σ_y, so a smaller noise yields a steeper (more deterministic) transfer function; in the limit of zero noise, the transfer function is a step and we recover a completely deterministic neuron. \n\nNote that P_Δ is actually an instantaneous function of v(t), not the stimulus itself s(t). If the noise is large compared with s(t) we must consider the distribution g_s(v, t; σ_y) of voltages reached in response to the input s(t): \n\nP_y(t) = ∫_{-∞}^{θ} dv g_s(v, t; σ_y) P_Δ(v).   (7) \n\n3 Ensemble of Signals \n\nWhat if the inputs s(t) are themselves drawn from an ensemble? If their distribution is also Gaussian and white with mean μ and variance σ_s², and if the firing rate is low (E[T] ≫ τ), then the output spike train is Poisson. Why is firing Poisson only in the slow firing limit? The reason is that, by assumption, immediately following a spike the membrane potential resets to v₀; it must then rise (assuming μ > 0) to some asymptotic level that is independent of the initial conditions. During this rise the firing rate is lower than the asymptotic rate, because on average the membrane is farther from threshold, and its variance is lower. The rate at which the asymptote is achieved depends on τ. 
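Reading eq. (6) as P_Δ = ½ erfc((θ-v)/(σ_y√(2Δt))), a short numerical sketch can check it against Monte Carlo draws of the Gaussian jump Δv (variance σ_y²Δt) and show the sigmoid steepening as the noise shrinks. All numerical values here are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
theta, dt = 1.0, 0.001           # threshold and short interval; illustrative

def p_cross(v, sigma_y):
    """Eq. (6): probability of crossing threshold theta within dt, starting from v."""
    return 0.5 * math.erfc((theta - v) / (sigma_y * math.sqrt(2 * dt)))

# Monte Carlo check: the jump dv is Gaussian with variance sigma_y**2 * dt.
sigma_y, v = 1.5, 0.95
dv = sigma_y * math.sqrt(dt) * rng.standard_normal(200_000)
mc = float(np.mean(v + dv >= theta))
print(f"analytic {p_cross(v, sigma_y):.4f}  monte carlo {mc:.4f}")

# Smaller noise gives a steeper, more step-like transfer function of v:
for sig in (1.5, 0.5):
    print(sig, [round(p_cross(x, sig), 3) for x in (0.9, 0.95, 1.0, 1.05)])
```

At v = θ the crossing probability is exactly ½ for any noise level, since erfc(0) = 1; away from threshold the smaller σ_y collapses the curve toward the 0/1 step of the deterministic neuron.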
In the limit t ≫ τ, some asymptotic distribution of voltage g_∞(v) is attained. Note that if we make the reset v₀ stochastic, with a distribution given by g_∞(v), then the firing probability would be the same even immediately after spiking, and firing would be Poisson for all firing rates. \n\nA Poisson process is characterized by its mean alone. We therefore solve the FPE (eq. 2) for the steady state by setting ∂g(t,v)/∂t = 0 (we consider only threshold crossings from times t ≫ τ; neglecting the early events results in only a small error, since we have assumed E[T] ≫ τ). Thus with the absorbing boundary at θ the distribution at time t ≫ τ (given here for μ = 0) is \n\ng_∞(v; σ_y) = k₁ (1 - k₂ erfi[v/(σ_y √τ)]) exp[-v²/(σ_y² τ)],   (8) \n\nwhere σ_y² = σ_s² + σ_n², erfi(z) = -i erf(iz), k₁ determines the normalization (the sign of k₁ determines whether the solution extends to positive or negative infinity), and k₂ = 1/erfi(θ/(σ_y √τ)) is determined by the boundary. The instantaneous Poisson rate parameter is then obtained through eq. (7).   (9) \n\nFig. 1 tests the validity of the exponential approximation. The top graph shows the ISI distribution near the \"balance point\", when the excitation is in balance with the inhibition and the membrane potential hovers just subthreshold. The bottom curves show the ISI distribution far below the balance point. In both cases, the exponential distribution provides a good approximation for t ≫ τ. \n\n4 Discussion \n\nThe main point of this paper is to make explicit the relation between the Poisson and integrate-and-fire models of neuronal activity. The key difference between them is that the former is stochastic while the latter is deterministic. 
That is, given exactly the same stimulus, the Poisson neuron produces different spike trains on different trials, while the integrate-and-fire neuron produces exactly the same spike train each time. It is therefore clear that if some degree of stochasticity is to be obtained in the integrate-and-fire model, it must arise from noise in the stimulus itself. \n\nThe relation we have derived here is purely formal; we have intentionally remained agnostic about the deep issues of what is signal and what is noise in the inputs to a neuron. We observe nevertheless that although we derive a limit (eq. 9) in which the spike train of an integrate-and-fire neuron is a Poisson process (i.e. the probability of obtaining a spike in any interval is independent of obtaining a spike in any other interval, except for very short intervals), from the point of view of information processing it is a very different process from the purely stochastic rate-modulated Poisson neuron. In fact, in this limit the spike train is deterministically Poisson if σ_y = σ_s, i.e. when n(t) = 0; in this case the output is a purely deterministic function of the input, but the ISI distribution is exponential. \n\nReferences \n\nAmit, D. and Tsodyks, M. (1991). Quantitative study of attractor neural network retrieving at low spike rates. I. Substrate-spikes, rates and neuronal gain. Network: Computation in Neural Systems, 2:259-273. \n\nDeWeese, M. (1995). Optimization principles for the neural code. PhD thesis, Dept. of Physics, Princeton University. \n\nDeWeese, M. (1996). Optimization principles for the neural code. In Hasselmo, M., editor, Advances in Neural Information Processing Systems, vol. 8. MIT Press, Cambridge, MA. \n\nShadlen, M. and Newsome, W. (1994). Noise, neural codes and cortical organization. Current Opinion in Neurobiology, 4:569-579. \n\nShadlen, M. and Newsome, W. (1995). 
Is there a signal in the noise? [comment]. Current Opinion in Neurobiology, 5:248-250. \n\nSnyder, D. and Miller, M. (1991). Random Point Processes in Time and Space, 2nd edition. Springer-Verlag. \n\nSoftky, W. (1995). Simple codes versus efficient codes. Current Opinion in Neurobiology, 5:239-247. \n\nSoftky, W. and Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neuroscience, 13:334-350. \n\nTuckwell, H. (1988). Introduction to Theoretical Neurobiology (2 vols.). Cambridge. \n\nUhlenbeck, G. and Ornstein, L. (1930). On the theory of Brownian motion. Phys. Rev., 36:823-841. \n\nZador, A. M. and Pearlmutter, B. A. (1996). VC dimension of an integrate and fire neuron model. Neural Computation, 8(3). In press. \n\n[Figure 1: ISI distributions at the balance point and in the exponential limit.] \n\nFigure 1: ISI distributions. (A; top) ISI distribution for the leaky integrate-and-fire model at the balance point, where the asymptotic membrane potential is just subthreshold, for two values of the signal variance σ². Increasing σ² shifts the distribution to the left. For the left curve, the parameters were chosen so that E[T] ≫ τ, giving a nearly exponential distribution; for the right curve, the distribution would be hard to distinguish experimentally from an exponential distribution with a refractory period. (τ = 50 msec; left: E[T] = 166 msec; right: E[T] = 57 msec.) 
\n(B; bottom) In the subthreshold regime, the ISI distribution (solid) is nearly exponential (dashed) for intervals greater than the membrane time constant. (τ = 50 msec; E[T] = 500 msec.) ", "award": [], "sourceid": 1057, "authors": [{"given_name": "Charles", "family_name": "Stevens", "institution": null}, {"given_name": "Anthony", "family_name": "Zador", "institution": null}]}