{"title": "Distributed Synchrony of Spiking Neurons in a Hebbian Cell Assembly", "book": "Advances in Neural Information Processing Systems", "page_first": 129, "page_last": 135, "abstract": null, "full_text": "Distributed Synchrony of Spiking Neurons in a Hebbian Cell Assembly \n\nDavid Horn, Nir Levy \n\nSchool of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978, Israel \nhorn@neuron.tau.ac.il, nirlevy@post.tau.ac.il \n\nIsaac Meilijson, Eytan Ruppin \n\nSchool of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978, Israel \nisaco@math.tau.ac.il, ruppin@math.tau.ac.il \n\nAbstract \n\nWe investigate the behavior of a Hebbian cell assembly of spiking neurons formed via a temporal synaptic learning curve. This learning function is based on recent experimental findings. It includes potentiation for short time delays between pre- and post-synaptic neuronal spiking, and depression for spiking events occurring in the reverse order. The coupling between the dynamics of the synaptic learning and of the neuronal activation leads to interesting results. We find that the cell assembly can fire asynchronously, but may also function in complete synchrony, or in distributed synchrony. The latter implies spontaneous division of the Hebbian cell assembly into groups of cells that fire in a cyclic manner. We investigate the behavior of distributed synchrony both by simulations and by analytic calculations of the resulting synaptic distributions. \n\n1 Introduction \n\nThe Hebbian paradigm that serves as the basis for models of associative memory is often conceived as the statement that a group of excitatory neurons (the Hebbian cell assembly) that are coupled synaptically to one another fire together when a subset of the group is being excited by an external input. 
Yet the details of the temporal spiking patterns of neurons in such an assembly are still ill understood. Theoretically it seems quite obvious that there are two general types of behavior: synchronous neuronal firing, and asynchrony, where no temporal order exists in the assembly and the different neurons fire randomly but with the same overall rate. Further subclassifications were recently suggested by [Brunel, 1999]. Experimentally this question is far from being settled because evidence for the associative memory paradigm is quite scarce. On one hand, one possible realization of associative memories in the brain was demonstrated by [Miyashita, 1988] in the inferotemporal cortex. This area was recently reinvestigated by [Yakovlev et al., 1998], who compared their experimental results with a model of asynchronized spiking neurons. On the other hand, there exists experimental evidence [Abeles, 1982] for temporal activity patterns in the frontal cortex that Abeles called synfire chains. Could they correspond to an alternative type of synchronous realization of a memory attractor? \n\nTo answer these questions and study the possible realizations of attractors in cortical-like networks, we investigate the temporal structure of an attractor assuming the existence of a synaptic learning curve that is continuously applied to the memory system. This learning curve is motivated by the experimental observations of [Markram et al., 1997, Zhang et al., 1998] that synaptic potentiation or depression occurs within a critical time window in which both pre- and post-synaptic neurons have to fire. If the pre-synaptic neuron fires first, within 30ms or so, potentiation will take place. Depression is the rule for the reverse order. 
\nThe regulatory effects of such a synaptic learning curve on the synapses of a single neuron that is subjected to external inputs were investigated by [Abbott and Song, 1999] and by [Kempter et al., 1999]. We investigate here the effect of such a rule within an assembly of neurons that are all excited by the same external input throughout a training period, and are allowed to influence one another through their resulting sustained activity. \n\n2 The Model \n\nWe study a network composed of N_E excitatory and N_I inhibitory integrate-and-fire neurons. Each neuron in the network is described by its subthreshold membrane potential V_i(t) obeying \n\ndV_i(t)/dt = -(1/tau_n) V_i(t) + R I_i(t)   (1) \n\nwhere tau_n is the neuronal integration time constant. A spike is generated when V_i(t) reaches the threshold V_rest + theta, upon which a refractory period of tau_RP is set on and the membrane potential is reset to V_reset, where V_rest < V_reset < V_rest + theta. I_i(t) is the sum of recurrent and external synaptic current inputs. The net synaptic input charging the membrane of excitatory neuron i at time t is \n\nR I_i(t) = sum_j J_ij^EE(t) sum_l delta(t - t_j^l - tau_d) - sum_j J_ij^EI sum_m delta(t - t_j^m - tau_d) + I^ext   (2) \n\nsumming over the different synapses of j = 1, ..., N_E excitatory neurons and of j = 1, ..., N_I inhibitory neurons, with postsynaptic efficacies J_ij^EE(t) and J_ij^EI respectively. The sum over l (m) represents a sum over the different spikes arriving at synapse j, at times t = t_j^l + tau_d (t = t_j^m + tau_d), where t_j^l (t_j^m) is the emission time of the l-th (m-th) spike from the excitatory (inhibitory) neuron j, and tau_d is the synaptic delay. I^ext, the external current, is assumed to be random and independent at each neuron and each time step, drawn from a Poisson distribution with mean lambda^ext. 
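As an illustration only (this is a minimal sketch, not the authors' code), the integrate-and-fire dynamics of Eq. 1 can be written as a simple Euler update; all numerical values below, including the drive R*I, are placeholder choices:

```python
# Minimal sketch of the integrate-and-fire dynamics of Eq. 1:
#   dV_i/dt = -V_i/tau_n + R*I_i,
# with threshold V_rest + theta and reset to V_reset.
# All parameter values here are illustrative placeholders, not the paper's.
tau_n = 40.0                             # membrane time constant (ms)
dt = 0.5                                 # integration step (ms)
V_rest, theta, V_reset = 0.0, 1.0, 0.5   # V_rest < V_reset < V_rest + theta

def step(V, RI):
    """One Euler step; returns (new potential, whether the neuron spiked)."""
    V = V + dt * (-V / tau_n + RI)
    if V >= V_rest + theta:              # threshold crossing -> spike
        return V_reset, True             # reset (refractory period omitted)
    return V, False

V, spikes = V_rest, 0
for _ in range(1000):                    # constant suprathreshold drive R*I
    V, fired = step(V, 0.05)
    spikes += fired
```

With a constant drive whose steady-state potential tau_n*R*I exceeds the threshold, the sketch fires periodically, which is all that is needed to illustrate the threshold-and-reset mechanism.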
\nAnalogously, the synaptic input to the inhibitory neuron i at time t is \n\nR I_i(t) = sum_j J_ij^IE(t) sum_l delta(t - t_j^l - tau_d) - sum_j J_ij^II sum_m delta(t - t_j^m - tau_d) + I^ext   (3) \n\nWe assume full connectivity among the excitatory neurons, but only partial connectivity for the other three types of possible connections, with connection probabilities denoted by C^EI, C^IE and C^II. In the following we will report simulation results in which the synaptic delays tau_d were assigned to each synapse, or pair of neurons, randomly, chosen from some finite set of values. Our analytic calculation will be done for one fixed value of this delay parameter. \n\nThe synaptic efficacies between excitatory neurons are assumed to be potentiated or depressed according to the firing patterns of the pre- and post-synaptic neurons. In addition we allow for a uniform synaptic decay. Thus each excitatory synapse obeys \n\ndJ_ij^EE(t)/dt = F_ij(t) - (1/tau_s) J_ij^EE(t)   (4) \n\nwhere the synaptic decay constant tau_s is assumed to be very large compared to the membrane time constant tau_n. The J_ij^EE(t) are constrained to vary in the range [0, J_max]. The change in synaptic efficacy is defined by F_ij(t), as \n\nF_ij(t) = sum_{k,l} [delta(t - t_i^k) K_P(t_j^l - t_i^k) + delta(t - t_j^l) K_D(t_j^l - t_i^k)]   (5) \n\nwhere K_P and K_D are the potentiation and depression branches of the kernel function \n\nK(delta) = -c delta exp[-(a delta + b)^2]   (6) \n\nplotted in Figure 1. Following [Zhang et al., 1998] we distinguish between the situation where the postsynaptic spike, at t_j^l, appears after or before the presynaptic spike, at t_i^k, using the asymmetric kernel that captures the essence of their experimental observations. \n\nFigure 1: The kernel function whose left part, K_P, leads to potentiation of the synapse, and whose right branch, K_D, causes synaptic depression. 
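The kernel of Eq. 6 is easy to sketch numerically; the snippet below is illustrative only, with constants a, b, c chosen for display rather than taken from the paper:

```python
import numpy as np

# Sketch of the temporal learning kernel of Eq. 6,
#   K(delta) = -c * delta * exp(-(a*delta + b)^2),
# where delta is the spike-time difference entering Eq. 5.
# The constants a, b, c are illustrative choices, not the paper's values.
a, b, c = 0.2, 0.5, 1.0

def K(delta):
    """Kernel value: the delta < 0 branch is positive (potentiation, K_P in
    Figure 1) and the delta > 0 branch is negative (depression, K_D)."""
    return -c * delta * np.exp(-(a * delta + b) ** 2)
```

Note that with b = 0 the kernel becomes exactly antisymmetric, K(-delta) = -K(delta), which corresponds to the equal potentiation and depression branches used in the simulations of Figures 2 and 3; b != 0 makes the branches asymmetric.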
\n\n3 Distributed Synchrony of a Hebbian Assembly \n\nWe have run our system with synaptic delays chosen randomly to be either 1, 2, or 3ms, and temporal parameters tau_n chosen as 40ms for excitatory neurons and 20ms for inhibitory ones. Turning external input currents off after a while, we obtained sustained firing activities in the range of 100-150 Hz. We have found, in addition to synchronous and asynchronous realizations of this attractor, a mode of distributed synchrony. A characteristic example of a long cycle is shown in Figure 2: the 100 excitatory neurons split into groups such that each group fires at the same frequency and at a fixed phase difference from any other group. The J_ij^EE synaptic efficacies are initiated as small random values. The learning process leads to the self-organized synaptic matrix displayed in Figure 3(a). The block form of this matrix represents the ordered couplings that are responsible for the fact that each coherent group of neurons feeds the activity of the groups that follow it. The self-organized groups form spontaneously. \n\nFigure 2: Distributed synchronized firing mode. The firing patterns of six cell assemblies of excitatory neurons are displayed vs time (in ms). These six groups of neurons formed in a self-organized manner for a kernel function with equal potentiation and depression. The delays were chosen randomly from three values, 1, 2 or 3ms, and the system is monitored every 0.5ms. 
When the synapses are affected by some external noise, as can come about from Hebbian learning in which these neurons are being coupled with other pools of neurons, the groups will change and regroup, as seen in Figures 3(b) and 3(c). \n\nFigure 3: A synaptic matrix for n = 6 distributed synchrony. The synaptic matrix between the 100 excitatory neurons of our system is displayed in a grey-level code, with black meaning zero efficacy and white standing for the synaptic upper bound. (a) The matrix that exists during the distributed synchronous mode of Figure 2. Its basis is ordered such that neurons that fire together are grouped together. (b) Using the same basis as in (a), a new synaptic matrix is shown, one that is formed after stopping the sustained activity of Figure 2, introducing noise in the synaptic matrix, and reinstituting the original memory training. (c) The same matrix as (b) is shown in a new basis that exhibits connections that lead to a new and different realization of distributed synchrony. \n\nA stable distributed synchrony cycle can be simply understood for the case of a single synaptic delay setting the basic step, or phase difference, of the cycle. When several delay parameters exist, a situation that probably represents more accurately the alpha-function character of synaptic transmission in cortical networks, distributed synchrony may still be obtained, as is evident from Figure 2. After some time the cycle may destabilize and regrouping may occur by itself, without external interference. The likelihood of this scenario is increased because different synaptic connections that have different delays can interfere with one another. Nonetheless, over time scales of the type shown in Figure 2, grouping is stable. 
\n\n4 Analysis of a Cycle \n\nIn this section we analyze the dynamics of the network when it is in a stable state of distributed synchrony. We assume that n groups of neurons are formed and calculate the stationary distribution of J_ij^EE(t). In this state the firing pattern of every two neurons in the network can be characterized by their frequency nu(t) and by their relative phase delta. We assume that delta is a random normal variable with mean mu_delta and standard deviation sigma_delta. Thus, Eq. 4 can be rewritten as the following stochastic differential equation \n\ndJ_ij^EE(t) = [mu_F_ij(t) - (1/tau_s) J_ij^EE(t)] dt + sigma_F_ij(t) dW(t)   (7) \n\nwhere F_ij(t) (Eq. 5) is represented here by a drift term mu_F_ij(t) and a diffusion term sigma_F_ij(t), which are its mean and standard deviation. W(t) describes a Wiener process. Note that both mu_F_ij(t) and sigma_F_ij(t) are calculated for a specific distribution of delta and are functions of mu_delta and sigma_delta. \n\nThe stochastic process that satisfies Eq. 7 will satisfy the Fokker-Planck equation for the probability distribution f of J_ij^EE, \n\ndf(J_ij^EE, t)/dt = -d/dJ_ij^EE [(mu_F_ij(t) - (1/tau_s) J_ij^EE) f(J_ij^EE, t)] + (sigma_F_ij^2(t)/2) d^2 f(J_ij^EE, t)/d(J_ij^EE)^2   (8) \n\nwith reflecting boundary conditions imposed by the synaptic bounds, 0 and J_max. Since we are interested in the stable state of the process we solve the stationary equation. The resulting density function is \n\nf(J_ij^EE; mu_delta, sigma_delta) = (N / sigma_F_ij^2(t)) exp[(1 / sigma_F_ij^2(t)) (2 mu_F_ij J_ij^EE - (1/tau_s) (J_ij^EE)^2)]   (9) \n\nwhere N is the normalization constant   (10) \n\nEq. 9 enables us to calculate the stationary distribution of the synaptic efficacies between the presynaptic neuron i and the post-synaptic neuron j given their frequency nu and the parameters mu_delta and sigma_delta. An example of a solution for a 3-cycle is shown in Figure 4. 
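The stationary density of Eq. 9 can be evaluated numerically on the bounded interval; the sketch below is illustrative, with placeholder values for the drift mu_F, the diffusion sigma_F, the decay tau_s and the bound J_max (none are fitted to the paper's network):

```python
import numpy as np

# Numerical sketch of the stationary density of Eq. 9 on [0, J_max]:
#   f(J) proportional to exp[(2*mu_F*J - J^2/tau_s) / sigma_F^2],
# with the prefactor fixed by normalization over the bounded interval.
# All parameter values below are illustrative placeholders.
mu_F, sigma_F, tau_s, J_max = 0.02, 0.05, 10.0, 0.5

J = np.linspace(0.0, J_max, 1001)
log_f = (2.0 * mu_F * J - J ** 2 / tau_s) / sigma_F ** 2
f = np.exp(log_f - log_f.max())   # subtract the max for numerical stability
f /= f.sum() * (J[1] - J[0])      # normalize so the density integrates to 1
```

Varying mu_F in such a sketch shifts the weight of the density between the two reflecting bounds, which gives a quick qualitative feel for how the drift term shapes the synaptic distributions of Figures 4(b) and 5(b).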
In this case all neurons fire with frequency l/ = (3Td)-1 and J.Lt5 \ntakes one of the values -Td, 0, Td. \n\nSimulation results of a 3-cycle in a network of excitatory and inhibitory integrate(cid:173)\nand-fire neurons described in Section 2 are given in Figure 5. As can be seen the \nresults obtained from the analysis match those observed in the simulation. \n\n5 Discussion \n\ninteresting experimental observations of \n\nThe \n[Markram et al., 1997, Zhang et al., 1998] have led us to study their implica(cid:173)\ntions for the firing patterns of a Hebbian cell assembly. We find that, in addition \n\nsynaptic \n\nlearning \n\ncurves \n\n\f134 \n\nD. Horn, N. Levy, 1. Meilijson and E. Ruppin \n\n(a) \n\n(b) \n\n70 \n\n60 \n\nso \n\n40 \n\n30 \n\n20 \n\n, 0 \n\n0 \n\n0 \n\n) \n\n0.' \n\n0.2 \n\n0.3 \n\n0.4 \n\n0. 5 \n\nJEE \n\n\" \n\n01 \n\n0 .2 \n\n0.3 \n\n0 4 \n\nFigure 4: Results of the analysis for n = 3, a6 = 2ms and Td = 2.5ms. (a) The \nsynaptic matrix. Each of the nine blocks symbolizes a group of connections between \nneurons that have a common phase-lag J..l6 . The mean of Ji~E was calculated for \neach cell by Eq. 9 and its value is given by the gray scale tone. (b) The distribution \nof synaptic values between all excitatory neurons. \n\n(a) \n\n(b) \n\n5o0 0 , - - - - - - - - - - - - - , \n\n4500 \n\n4000 \n\n3500 \n\n3000 \n\n2500 \n\n2000 \n\n1500 \n\n500 \n\n0. ' \n\n0.2 \n\n0.3 \n\n0.4 \n\n0. 5 \n\no 0\" - - -0 .... \n\n' - - ' ... 0.2:--~0.3::----::\"\"0.4c--'\"-:\"0.5 \n\nJEE , \n\nFigure 5: Simulation results for a network of N E = 100 and NJ = 50 integrate(cid:173)\nand-fire neurons, when the network is in a stable n = 3 state. Tn = 10ms for both \nexcitatory and inhibitory neurons. The average frequency of the neurons is 130 Hz. \n(a) The excitatory synaptic matrix. (b) Histogram of the synaptic efficacies. \n\nto the expected synchronous and asynchronous modes, an interesting behavior \nof distributed synchrony can emerge. 
This is the phenomenon that we have investigated both by simulations and by analytic evaluation. \n\nDistributed synchrony is a mode in which the Hebbian cell assembly breaks into an n-cycle. This cycle is formed by instantaneous symmetry breaking; hence the specific classification of neurons into one of the n groups depends on initial conditions, noise, etc. Thus the different groups of a single cycle do not have a semantic invariant meaning of their own. It seems perhaps premature to try and identify these cycles with synfire chains [Abeles, 1982] that show recurrence of firing patterns of groups of neurons with periods of hundreds of ms. Note, however, that if we make such an identification, it is a different explanation from the model of [Herrmann et al., 1995], which realizes the synfire chain by combining sets of preexisting patterns into a cycle. \n\nThe simulations in Figures 2 and 3 were carried out with a learning curve that possessed equal potentiation and depression branches, i.e. was completely antisymmetric in its argument. In that case no synaptic decay was allowed. Figure 5, on the other hand, had stronger potentiation than depression, and a finite synaptic decay time was assumed. Other conditions in these nets were different too, yet both had a window of parameters where distributed synchrony showed up. Using the analytic approach of Section 4 we can derive the probability distribution of synaptic values once a definite cyclic pattern of distributed synchrony is formed. An analytic solution of the combined dynamics of both the synapses and the spiking neurons is still an open challenge. Hence we have to rely on the simulations to prove that distributed synchrony is a natural spatiotemporal behavior that follows from combined neuronal dynamics and synaptic learning as outlined in Section 2. 
To the extent that both types of dynamics reflect correctly the dynamics of cortical neural networks, we may expect distributed synchrony to be a mode in which neuronal attractors are being realized. \n\nThe mode of distributed synchrony is of special significance to the field of neural computation since it forms a bridge between the feedback and feed-forward paradigms. Note that whereas the attractor that is formed by the Hebbian cell assembly is of global feedback nature, i.e. one may regard all neurons of the assembly as being connected to other neurons within the same assembly, the emerging structure of distributed synchrony shows that it breaks down into groups. These groups are connected to one another in a self-organized feed-forward manner, thus forming the cyclic behavior we have observed. \n\nReferences \n\n[Abbott and Song, 1999] L. F. Abbott and S. Song. Temporally asymmetric Hebbian learning, spike timing and neuronal response variability. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11: Proceedings of the 1998 Conference, pages 69-75. MIT Press, 1999. \n\n[Abeles, 1982] M. Abeles. Local Cortical Circuits. Springer, Berlin, 1982. \n\n[Brunel, 1999] N. Brunel. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 1999. \n\n[Herrmann et al., 1995] M. Herrmann, J. Hertz, and A. Prugel-Bennett. Analysis of synfire chains. Network: Computation in Neural Systems, 6:403-414, 1995. \n\n[Kempter et al., 1999] R. Kempter, W. Gerstner, and J. Leo van Hemmen. Spike-based compared to rate-based Hebbian learning. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11: Proceedings of the 1998 Conference, pages 125-131. MIT Press, 1999. \n\n[Markram et al., 1997] H. Markram, J. Lubke, M. Frotscher, and B. Sakmann. 
Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275(5297):213-215, 1997. \n\n[Miyashita, 1988] Y. Miyashita. Neuronal correlate of visual associative long-term memory in the primate temporal cortex. Nature, 335:817-820, 1988. \n\n[Yakovlev et al., 1998] V. Yakovlev, S. Fusi, E. Berman, and E. Zohary. Inter-trial neuronal activity in inferior temporal cortex: a putative vehicle to generate long-term visual associations. Nature Neuroscience, 1(4):310-317, 1998. \n\n[Zhang et al., 1998] L. I. Zhang, H. W. Tao, C. E. Holt, W. A. Harris, and M. Poo. A critical window for cooperation and competition among developing retinotectal synapses. Nature, 395:37-44, 1998. \n", "award": [], "sourceid": 1703, "authors": [{"given_name": "David", "family_name": "Horn", "institution": null}, {"given_name": "Nir", "family_name": "Levy", "institution": null}, {"given_name": "Isaac", "family_name": "Meilijson", "institution": null}, {"given_name": "Eytan", "family_name": "Ruppin", "institution": null}]}