{"title": "Dynamical Constraints on Computing with Spike Timing in the Cortex", "book": "Advances in Neural Information Processing Systems", "page_first": 285, "page_last": 292, "abstract": null, "full_text": "Dynamical Constraints on Computing \n\nwith Spike Timing in the Cortex \n\nArunava Banerjee and Alexandre Pouget \nDepartment of Brain and Cognitive Sciences \n\nUniversity of Rochester, Rochester, New York 14627 \n\n{arunavab, alex} @bcs.rochester.edu \n\nAbstract \n\nIf the cortex uses spike timing to compute, the timing of the spikes \nmust be robust to perturbations. Based on a recent framework that \nprovides a simple criterion to determine whether a spike sequence \nproduced by a generic network is sensitive to initial conditions, and \nnumerical simulations of a variety of network architectures, we \nargue within the limits set by our model of the neuron, that it is \nunlikely that precise sequences of spike timings are used for \ncomputation under conditions typically found in the cortex. \n\n1 Introduction \n\nSeveral models of neural computation use the precise timing of spikes to encode \ninformation. For example, Abeles et al. have proposed synchronous volleys of \nspikes (synfire chains) as a candidate for representing information in the cortex [1]. \nMore recently, Maass has demonstrated how spike timing in general, not merely \nsynfire chains, can be utilized to perform nonlinear computations [6]. \n\nFor any of these schemes to function, the timing of the spikes must be robust to \nsmall perturbations; i.e., small perturbations of spike timing should not result in \nsuccessively larger fluctuations in the timing of subsequent spikes. To use the \nterminology of dynamical systems theory, the network must not exhibit sensitivity \nto initial conditions. 
Indeed, reliable computation would simply be impossible if the timing of spikes were sensitive to the slightest source of noise, such as synaptic release variability or thermal fluctuations in the opening and closing of ionic channels. \n\nDiesmann et al. have recently examined this issue for the particular case of synfire chains in feed-forward networks [4]. They demonstrated that the propagation of a synfire chain over several layers of integrate-and-fire neurons can be robust to 2 Hz of random background activity and to a small amount of noise in the spike timings. The question we investigate here is whether this result generalizes to the propagation of any arbitrary spatiotemporal configuration of spikes through a recurrent network of neurons. This question is central to any theory of computation in cortical networks using spike timing, since it is well known that the connectivity between neurons in the cortex is highly recurrent. Although there have been earlier attempts at resolving similar issues, the applicability of the results is limited by the model of the neuron [8] or the pattern of propagated spikes [5] considered. \n\nBefore we can address this question in a principled manner, however, we must confront a couple of confounding issues. First stands the problem of stationarity. As is well known, Lyapunov characteristic exponents of trajectories are limit quantities that are guaranteed to exist (almost surely) in classical dynamical systems that are stationary. In systems such as the cortex that receive a constant barrage of transient inputs, it is questionable whether such a concept bears much relevance. Fortunately, our simulations indicate that convergence or divergence of trajectories in cortical networks can occur very rapidly (within 200-300 msec).
Assuming that external inputs do not change drastically over such short time scales, one can reasonably apply the results from analysis under stationary conditions to such systems. \n\nSecond, the issues of how a network should be constructed so as to generate a particular spatiotemporal pattern of spikes, as well as whether a given spatiotemporal pattern of spikes can be generated in principle, remain unresolved in the general setting. It might be argued that without such knowledge, any classification of spike patterns into sensitive and insensitive classes is inherently incomplete. However, as shall be demonstrated later, sensitivity to initial conditions can be inferred under relatively weak conditions. In addition, we shall present simulation results from a variety of network architectures to support our general conclusions. \n\nThe remainder of the paper is organized as follows. In Section 2, we briefly review relevant aspects of the dynamical system corresponding to a recurrent neuronal network, as formulated in [2], and formally define \"sensitivity to initial conditions\". In Section 3, we present simulation results from a variety of network architectures. In Section 4, we interpret these results formally, which in turn leads us to an additional set of experiments. In Section 5, we draw conclusions regarding the issue of computation using spike timing in cortical networks based on these results. \n\n2 Spike dynamics \n\nA detailed exposition of an abstract dynamical system that models recurrent systems of biological neurons was presented in [2]. Here, we recount those aspects of the system that are relevant to the present discussion.
Based on the intrinsic nature of the processes involved in the generation of postsynaptic potentials (PSPs) and of those involved in the generation of action potentials (spikes), it was shown that the state of a system of neurons can be specified by enumerating the temporal positions of all spikes generated in the system over a bounded past. For example, in Figure 1, the present state of the system is described by the positions of the spikes (solid lines) in the shaded region at t = 0, and the state of the system at a future time T is specified by the positions of the spikes (solid lines) in the shaded region at t = T. Each internal neuron i in the system is assigned a membrane potential function P_i(·) that takes as its input the present state and generates the instantaneous potential at the soma of neuron i. It is the particular instantiation of the set of functions P_i(·) that determines the nature of the neurons as well as their connectivity in the network. \n\nConsider now the network in Figure 1, initialized at the particular state described by the shaded region at t = 0. Whenever the integration of the PSPs from all presynaptic spikes to a neuron, combined with the hyperpolarizing effects of its own spikes (the precise nature of the combination specified by P_i(·)), brings its membrane potential above threshold, the neuron emits a new spike. If the spikes in the shaded region at t = 0 were perturbed in time (dotted lines), this would result in a perturbation on the new spike. The size of the new perturbation would depend upon the positions of the spikes in the shaded region, the nature of P_i(·), and the sizes of the old perturbations. This scenario would in turn repeat to produce further perturbations on future spikes. In essence, any initial set of perturbations would propagate from spike to spike to produce a set of perturbations at any arbitrary future time t = T.
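This spike-to-spike bookkeeping can be sketched in a few lines. At each threshold crossing, the new spike's perturbation is a weighted sum of the perturbations of the contributing spikes, with weights proportional to the slopes of the corresponding PSPs (or afterhyperpolarizations) at that instant, normalized to sum to 1; this is the rule analyzed formally in Section 4. The code below is our own illustration (the function name and the numerical values are assumptions, not taken from the paper's simulations):

```python
import numpy as np

def new_spike_perturbation(slopes, perturbations):
    # alpha_i = p_i / sum_j p_j : each weight is the slope of the PSP (or
    # afterhyperpolarization) triggered by spike i at the instant the new
    # spike is generated, normalized so that the weights sum to 1.
    slopes = np.asarray(slopes, dtype=float)
    alphas = slopes / slopes.sum()
    return float(alphas @ np.asarray(perturbations, dtype=float))

# Illustrative slopes (mV/msec) at threshold crossing; the negative entry
# stands for a decaying afterhyperpolarization.
slopes = [0.8, 0.3, -0.4]
# A rigid 1 msec shift of every contributing spike shifts the new spike
# by exactly 1 msec, since the weights sum to 1.
print(new_spike_perturbation(slopes, [1.0, 1.0, 1.0]))   # 1.0
```

Because the weights sum to 1, perturbing all spikes by the same amount amounts to a rigid displacement in time, which is the normalization constraint noted in Section 4.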
Figure 1: Schematic diagram of the spike dynamics of a system of neurons. Input neurons are colored gray and internal neurons black. Spikes are shown in solid lines and their corresponding perturbations in dotted lines. Note that spikes generated by the input neurons are not perturbed. Gray boxes demarcate a bounded past history starting at time t. The temporal positions of all spikes in the boxes specify the state of the system at times t = 0 and t = T. \n\nIt is of considerable importance to note at this juncture that while the specification of the network architecture and the synaptic weights determines the precise temporal sequence of spikes generated by the network, the relative size of successive perturbations is determined by the temporal positions of the spikes in successive state descriptions at the instant of the generation of each new spike. If it can be demonstrated that there are particular classes of state descriptions that lead to large relative perturbations, one can deduce the qualitative aspects of the dynamics of a network armed with only a general description of its architecture. A formal analysis in Section 4 will bring to light such a classification. \n\nLet column vectors x and y denote, respectively, perturbations on the spikes of internal neurons at times t = 0 and t = T. We pad each vector with as many zeroes as there are input spikes in the respective state descriptions. Let A_T denote the matrix such that y = A_T x. Let B and C be the matrices described in [3] that discard the rigid translational components from the final and initial perturbations. Then, the dynamics of the system is sensitive to initial conditions if lim_{T→∞} ||B · A_T · C|| = ∞.
If instead lim_{T→∞} ||B · A_T · C|| = 0, the dynamics is insensitive to initial conditions. \n\nA few comments are in order here. First, our interest lies not in the precise values of the Lyapunov characteristic exponents of trajectories (where they exist), but in whether the largest exponent is greater than or less than zero. Furthermore, the class of trajectories that satisfies either of the above criteria is larger (although not necessarily in measure) than the class of trajectories that have definite exponents. Second, input spikes are free parameters that have to be constrained in some manner if the above criteria are to be well-defined. By the same token, we do not consider the effects that perturbations of input spikes have on the dynamics of the system. \n\n3 Simulations and results \n\nA typical column in the cortex contains on the order of 10^5 neurons, approximately 80% of which are excitatory and the rest inhibitory. Each neuron receives around 10^4 synapses, approximately half of which are from neurons in the same column and the rest from excitatory neurons in other columns and the thalamus. These estimates indicate that even at background rates as low as 0.1 Hz, a column generates on average 10 spikes every millisecond. Since perturbations are propagated from spikes to generated spikes, divergence and/or convergence of spike trajectories could occur extremely rapidly. We test this hypothesis in this section through model simulations. \n\nAll experiments reported here were conducted on a system containing 1000 internal neurons (set to model a cortical column) and 800 excitatory input neurons (set to model the input into the column). Of the 1000 internal neurons, 80% were chosen to be excitatory and the rest inhibitory. Each internal neuron received 100 synapses from other (internal as well as input) neurons in the system.
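Two numbers in this setup are easy to check in code: the back-of-envelope population rate above, and the homogeneous Poisson spike trains used to drive the input neurons. A minimal sketch (illustrative only; the function name and the seeding are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_spike_train(rate_hz, duration_s):
    # Homogeneous Poisson process: draw i.i.d. exponential inter-spike
    # intervals, accumulate them, and keep the spikes inside the window.
    n_draws = int(rate_hz * duration_s * 3) + 50   # generous upper bound
    times = np.cumsum(rng.exponential(1.0 / rate_hz, size=n_draws))
    return times[times < duration_s]

# Column estimate from the text: 10^5 neurons firing at 0.1 Hz produce
# (10^5 neurons * 0.1 spikes/sec) / (10^3 msec/sec) = 10 spikes per msec.
print(1e5 * 0.1 / 1e3)   # 10.0

train = poisson_spike_train(5.0, 10.0)   # one 5 Hz input neuron, 10 seconds
```

With 800 such input neurons at 5 Hz, the system receives on the order of 4 spikes per millisecond from the inputs alone.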
The input neurons were set to generate random uncorrelated Poisson spike trains at a fixed rate of 5 Hz. \n\nThe membrane potential function P_i(·) for each internal neuron was modeled as the sum of excitatory and inhibitory PSPs triggered by the arrival of spikes at synapses, and afterhyperpolarization potentials triggered by the spikes generated by the neuron. PSPs were modeled using a kernel of the form ω (t/ν) e^{-t/ε} e^{-t/τ}, where ν, ε, and τ were set to mimic four kinds of synapses: NMDA, AMPA, GABA_A, and GABA_B. ω was set for excitatory and inhibitory synapses so as to generate a mean spike rate of 5 Hz by excitatory and 15 Hz by inhibitory internal neurons. The parameters were then held constant over the entire system, leaving the network connectivity and axonal delays as the only free parameters. After the generation of a spike, an absolute refractory period of 1 msec was introduced during which the neuron was prohibited from generating a spike. There was no voltage reset. However, each spike triggered an afterhyperpolarization potential with a decay constant of 30 msec that led to a relative refractory period. Simulations were performed in 0.1 msec time steps, and the time bound on the state description, as related in Section 2, was set at 200 msec. \n\nThe issue of correlated inputs was addressed by simulating networks of disparate architectures. At one extreme was an ordered two-layer ring network with input neurons forming the lower layer and internal neurons (with the inhibitory neurons placed evenly among the excitatory neurons) forming the upper layer. Each internal neuron received inputs from a sector of internal and input neurons that was centered on that neuron. As a result, any two neighboring internal neurons shared 96 of their 100 inputs (albeit with different axonal delays of 0.5-1.1 msec).
This had the effect of making output spike trains from neighboring internal neurons highly correlated, with sectors of internal neurons producing synchronized bursts of spikes. At the other extreme was a network where each internal neuron received inputs from 100 randomly chosen neurons from the entire population of internal and input neurons. Several other networks, in which neighboring internal neurons shared an intermediate percentage of their inputs, were also simulated. Here, we present results from the two extreme architectures. The results from all the other networks were similar. \n\nFigure 2(a) displays sample output spike trains from 100 neighboring internal neurons over a period of 450 msec for both architectures. In the first set of experiments, we simulated pairs of identical systems driven by identical inputs and initialized at identical states, except for one randomly chosen spike that was perturbed by 1 msec. In all cases, the spike trajectories diverged very rapidly. Figure 2(b) presents spike trains generated by the same 100 neighboring internal neurons from the two simulations from 200 to 400 msec after initialization, for both architectures. \n\nTo further explore the sensitivity of the spike trajectories, we partitioned each trajectory into segments of 500 spike generations each. For each such segment, we then extracted the spectral norm ||B · A_T · C|| after every 100 spike generations. Figure 2(c) presents the outcome of this analysis for both architectures. Although successive segments of 500 spike generations were found to be quite variable in their absolute sensitivity, each such segment was nevertheless found to be sensitive. We also simulated several other architectures (results not shown), such as systems with fixed axonal delays and ones with bursty behavior, with similar outcomes.
Figure 2: (a) Spike trains of 100 neighboring neurons for 450 msec from the ring and the random networks, respectively. (b) Spike trains from the same 100 neighboring neurons (above and below) 200 msec after initialization. Note that the trains have already diverged at 200 msec. (c) Spectral norm of sensitivity matrices of 14 successive segments of 500 spike generations each, computed in steps of 100 spike generations, for both architectures. \n\n4 Analysis and further simulations \n\nThe reasons behind the divergence of the spike trajectories presented in Section 3 can be found by considering how perturbations are propagated from the set of spikes in the current state description to a newly generated spike. As shown in [3], the perturbation of the new spike can be represented as a weighted sum of the perturbations of those spikes in the state description that contribute to the generation of the new spike. The weight assigned to a spike x_i is proportional to the slope of the PSP, or of the hyperpolarization, triggered by that spike (∂P(·)/∂x_i in the general case) at the instant of the generation of the new spike. Intuitively, the larger the slope, the greater the effect that a perturbation of that spike can have on the total potential at the soma, and hence the larger the perturbation on the new spike. The proportionality constant is set so that the weights sum to 1. This constraint reflects the fact that if all spikes were to be perturbed by a fixed quantity, this would amount to a rigid displacement in time, causing the new spike to be perturbed by the same quantity.
We denote the slopes by p_i and the weights by α_i. Then α_i = p_i / Σ_{j=1}^n p_j, where j ranges over all contributing spikes. \n\nWe now assume that at the generation of each new spike, the p_i's are drawn independently from a stationary distribution (for both internal and input contributing spikes), and that the ratio of the number of internal to the total (internal plus input) spikes in any state description remains close to a fixed quantity μ at all times. Note that this amounts to an assumed probability distribution on the likelihood of particular spike trajectories rather than one on possible network architectures and synaptic weights. The iterative construction of the matrix A_T, based on these conditions, was described in detail in [3]. It was also shown that the statistic ⟨Σ_{i=1}^m α_i²⟩ plays a central role in the determination of the sensitivity of the resultant spike trajectories. In a minor modification to the analysis in [3], we assume that A_T represents the full perturbation (internal plus input) at each step of the process. While this merely entails the introduction of additional rows with zero entries to account for input spikes in each state, it alters the effect that B has on ||B · A_T · C|| in a way that allows for a simpler as well as bidirectional bound on the norm. Since the analysis is identical to that in [3] and does not introduce any new techniques, we only report the result. If ⟨Σ_{i=1}^m α_i²⟩ > (2 + ε(m)) μ⁻¹ − 1 (resp. ⟨Σ_{i=1}^m α_i²⟩ < μ⁻¹ − 1), then the spike trajectories are almost surely sensitive (resp. insensitive) to initial conditions, where ε(m) is a correction term that vanishes as m grows, and m denotes the number of internal spikes in the state description.
If we make the liberal assumption that input spikes account for as much as half the total number of spikes in state descriptions, then, noting that m is a very large quantity (greater than 10^3 in all our simulations), the above constraint requires ⟨Σ α_i²⟩ > 3 for spike trajectories to be almost surely sensitive to initial conditions. From our earlier simulations, we extracted the value of Σ α_i² whenever a spike was generated, and computed the sample mean ⟨Σ α_i²⟩ over all spike generations. The mean was larger than 3 in all cases (69.6 for the ring and 11.3 for the random network). \n\nThe above criterion enables us to peer into the nature of the spike dynamics of real cortical columns: although simulating an entire column remains intractable, a single neuron can be simulated under various input scenarios, and the resultant statistic applied to infer the nature of the spike dynamics of a cortical column most of whose neurons operate under those conditions. \n\nAn examination of the mathematical nature of Σ α_i² reveals that its value rises as the size of the subset of p_i's that are negative grows larger. The criterion for sensitivity is therefore more likely to be met when a substantial portion of the excitatory PSPs are on their falling phase (and inhibitory PSPs on their rising phase) at the instant of the generation of each new spike. This corresponds to a case where the inputs into the neurons of a system are not strongly synchronized. Conversely, if spikes are generated soon after the arrival of a synchronized burst of spikes (all of whose excitatory PSPs are presumably on their rising phase), the criterion for sensitivity is less likely to be met. We simulated several combinations of the two input scenarios to identify cases where the corresponding spike trajectories in the system were not likely to be sensitive to initial conditions. \n\nWe constructed a model pyramidal neuron with 10,000 synapses, 85% of which were chosen to be excitatory and the rest inhibitory. The threshold of the neuron was set at 15 mV above resting potential. PSPs were modeled using the function described earlier with parameter values set to fit the data reported in [7]. For excitatory PSPs, the peak amplitudes ranged between 0.045 and 1.2 mV with the median around 0.15 mV, 10-90 rise times ranged from 0.75 to 3.35 msec, and widths at half amplitude ranged from 8.15 to 18.5 msec. For inhibitory PSPs, the peak amplitudes were on average twice as large, and the 10-90 rise times and widths at half amplitude were slightly larger. Whenever the neuron generated a new spike, the values of the p_i's were recorded and Σ α_i² was computed. The mean ⟨Σ α_i²⟩ was then computed over the set of all spike generations. In order to generate conservative estimates, samples with value above 10^4 were discarded (they comprised about 0.1% of the data). The datasets ranged in size from 3,000 to 15,000 samples. \n\nThree experiments simulating various levels of uncorrelated input/output activity were conducted. In particular, excitatory Poisson inputs at 2, 20, and 40 Hz were balanced by inhibitory Poisson inputs at 6.3, 63, and 124 Hz to generate output rates of approximately 2, 20, and 40 Hz, respectively. We confirmed that the output in all three cases was Poisson-like (CV = 0.77, 0.74, and 0.89, respectively). The mean ⟨Σ α_i²⟩ for the three experiments was 4.37, 5.66, and 9.52, respectively. \n\nNext, two sets of experiments simulating the arrival of regularly spaced synfire chains were conducted. In the first set the random background activity was set at 2 Hz, and in the second, at 20 Hz. The synfire chains comprised spike volleys that arrived every 50 msec.
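The statistic reported for these experiments is simple to compute from the slopes recorded at each spike generation. The sketch below is our own illustrative code (the function name and the sample slope values are assumptions); the threshold of 3 is the large-m sensitivity bound discussed above:

```python
import numpy as np

def sum_alpha_sq(slopes):
    # alpha_i = p_i / sum_j p_j ; the statistic is sum_i alpha_i^2.
    slopes = np.asarray(slopes, dtype=float)
    alphas = slopes / slopes.sum()
    return float(np.sum(alphas ** 2))

# Slopes of mixed sign (desynchronized input: many excitatory PSPs on
# their falling phase) nearly cancel in the denominator and inflate the
# statistic; same-sign slopes (a fresh synchronized volley) keep it small.
mixed  = [1.0, -0.9, 0.5, -0.4]   # sums to 0.2: strong cancellation
rising = [1.0, 0.9, 0.5, 0.4]     # sums to 2.8: no cancellation
print(sum_alpha_sq(mixed) > 3.0)    # True: criterion for sensitivity met
print(sum_alpha_sq(rising) < 3.0)   # True: criterion not met
```

Averaging this quantity over all recorded spike generations gives the mean ⟨Σ α_i²⟩ values quoted for each experiment.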
Four experiments were conducted within each set: volleys were composed of either 100 or 200 spikes (producing jolts of around 10 and 20 mV, respectively) that were either fully synchronized or dispersed over a Gaussian distribution with σ = 1 msec. The mean ⟨Σ α_i²⟩ for the experiments was as follows. At 2 Hz background activity, it was 0.49 (200 spikes/volley, synchronized), 0.60 (200 spikes/volley, dispersed), 2.46 (100 spikes/volley, synchronized), and 2.16 (100 spikes/volley, dispersed). At 20 Hz background activity, it was 4.39 (200 spikes/volley, synchronized), 8.32 (200 spikes/volley, dispersed), 6.77 (100 spikes/volley, synchronized), and 6.78 (100 spikes/volley, dispersed). \n\nFinally, two sets of experiments simulating the arrival of randomly spaced synfire chains were conducted. In the first set the random background activity was set at 2 Hz, and in the second, at 20 Hz. The synfire chains comprised a sequence of spike volleys that arrived randomly at a rate of 20 Hz. Two experiments were conducted within each set: volleys were composed of either 100 or 200 synchronized spikes. The mean ⟨Σ α_i²⟩ for the experiments was as follows. At 2 Hz background activity, it was 4.30 (200 spikes/volley) and 4.64 (100 spikes/volley). At 20 Hz background activity, it was 5.24 (200 spikes/volley) and 6.28 (100 spikes/volley). \n\n5 Conclusion \n\nAs was demonstrated in Section 3, sensitivity to initial conditions transcends unstructured connectivity in systems of spiking neurons. Indeed, our simulations indicate that sensitivity is more the rule than the exception in systems modeling cortical networks operating at low to moderate levels of activity. Since perturbations are propagated from spike to spike, trajectories that are sensitive can diverge very rapidly in systems that generate a large number of spikes within a short period of time.
Sensitivity therefore is an issue even for schemes based on precise sequences of spike timing with computation occurring over short (hundreds of msec) intervals. \n\nWithin the limits set by our model of the neuron, we have found that spike trajectories are likely to be sensitive to initial conditions in all scenarios except where large (100-200 spike) synchronized bursts occur in the presence of sparse background activity (2 Hz), with a sufficient but not too large an interval between successive bursts (50 msec). This severely restricts the possible use of precise spike sequences for reliable computation in cortical networks, for at least two reasons. First, unsynchronized activity can rise well above 2 Hz in the cortex, and second, the highly constrained nature of this dynamics would be apparent in in vivo recordings. \n\nAlthough cortical neurons can have vastly more complex responses than that modeled in this paper, our conclusions are based largely on the simplicity and the generality of the constraints identified (the analysis assumes a general membrane potential function P(·)). Although a more refined model of the cortical neuron could lead to different values of the computed statistic, we believe that the results are unlikely to cross the noted bounds and therefore change our overall conclusions. \n\nWe are, however, not arguing that computation with spike timing is impossible in general. There are neural structures, such as the nucleus laminaris in the barn owl and the electrosensory array in the electric fish, which have been shown to perform exquisitely precise computations using spike timing. Interestingly, these structures have very specialized neurons and network architectures. \n\nTo conclude, computation using precise spike sequences does not appear to be likely in the cortex in the presence of Poisson-like activity at levels typically found there. \n\nReferences \n\n[1] Abeles, M., Bergman, H., Margalit, E. & Vaadia, E.
(1993) Spatiotemporal firing patterns in the frontal cortex of behaving monkeys. Journal of Neurophysiology 70, pp. 1629-1638. \n\n[2] Banerjee, A. (2001) On the phase-space dynamics of systems of spiking neurons: I. Model and experiments. Neural Computation 13, pp. 161-193. \n\n[3] Banerjee, A. (2001) On the phase-space dynamics of systems of spiking neurons: II. Formal analysis. Neural Computation 13, pp. 195-225. \n\n[4] Diesmann, M., Gewaltig, M. O. & Aertsen, A. (1999) Stable propagation of synchronous spiking in cortical neural networks. Nature 402, pp. 529-533. \n\n[5] Gerstner, W., van Hemmen, J. L. & Cowan, J. D. (1996) What matters in neuronal locking. Neural Computation 8, pp. 1689-1712. \n\n[6] Maass, W. (1995) On the computational complexity of networks of spiking neurons. Advances in Neural Information Processing Systems 7, pp. 183-190. \n\n[7] Mason, A., Nicoll, A. & Stratford, K. (1991) Synaptic transmission between individual pyramidal neurons of the rat visual cortex in vitro. Journal of Neuroscience 11(1), pp. 72-84. \n\n[8] van Vreeswijk, C. & Sompolinsky, H. (1998) Chaotic balanced state in a model of cortical circuits. Neural Computation 10, pp. 1321-1372. \n", "award": [], "sourceid": 2337, "authors": [{"given_name": "Arunava", "family_name": "Banerjee", "institution": null}, {"given_name": "Alexandre", "family_name": "Pouget", "institution": null}]}