{"title": "Spiking Inputs to a Winner-take-all Network", "book": "Advances in Neural Information Processing Systems", "page_first": 1051, "page_last": 1058, "abstract": "", "full_text": "Spiking Inputs to a Winner-take-all Network\n\nMatthias Oster and Shih-Chii Liu\n\nInstitute of Neuroinformatics\n\nCH-8057 Zurich, Switzerland\n\n{mao,shih}@ini.phys.ethz.ch\n\nUniversity of Zurich and ETH Zurich\n\nWinterthurerstrasse 190\n\nAbstract\n\nRecurrent networks that perform a winner-take-all computation have\nbeen studied extensively. Although some of these studies include spik-\ning networks, they consider only analog input rates. We present results\nof this winner-take-all computation on a network of integrate-and-\ufb01re\nneurons which receives spike trains as inputs. We show how we can con-\n\ufb01gure the connectivity in the network so that the winner is selected after\na pre-determined number of input spikes. We discuss spiking inputs with\nboth regular frequencies and Poisson-distributed rates. The robustness of\nthe computation was tested by implementing the winner-take-all network\non an analog VLSI array of 64 integrate-and-\ufb01re neurons which have an\ninnate variance in their operating parameters.\n\n1 Introduction\n\nRecurrent networks that perform a winner-take-all computation are of great interest be-\ncause of the computational power they offer. They have been used in modelling attention\nand recognition processes in cortex [Itti et al., 1998,Lee et al., 1999] and are thought to be a\nbasic building block of the cortical microcircuit [Douglas and Martin, 2004]. Descriptions\nof theoretical spike-based models [Jin and Seung, 2002] and analog VLSI (aVLSI) imple-\nmentations of both spike and non-spike models [Lazzaro et al., 1989, Indiveri, 2000, Hahn-\nloser et al., 2000] can be found in the literature. 
Although the competition mechanism in these models uses spike signals, they usually consider the external input to the network to be either an analog input current or an analog value that represents the spike rate.

We describe the operation and connectivity of a winner-take-all network that receives input spikes. We consider the case of the hard winner-take-all mode, in which only the winning neuron is active and all other neurons are suppressed. We discuss a scheme for setting the excitatory and inhibitory weights of the network so that the winner, which receives the input with the shortest inter-spike interval, is selected after a pre-determined number of input spikes. The winner can be selected with as few as two input spikes, making the selection process fast [Jin and Seung, 2002].

Figure 1: Connectivity of the winner-take-all network: (a) in biological networks, inhibition is mediated by populations of global inhibitory interneurons (filled circle). To perform a winner-take-all operation, they are driven by excitatory neurons (unfilled circles) and in return they inhibit all excitatory neurons (black arrows: excitatory connections; dark arrows: inhibitory). (b) Network model in which the global inhibitory interneuron is replaced by full inhibitory connectivity of efficacy VI. Self-excitation of synaptic efficacy Vself stabilizes the selection of the winning neuron.

We tested this computation on an aVLSI chip with 64 integrate-and-fire neurons and various dynamic excitatory and inhibitory synapses. The distribution of mismatch (or variance) in the operating parameters of the neurons and synapses has been reduced using a spike coding mismatch compensation procedure described in [Oster and Liu, 2004].
The results shown in Section 3 of this paper were obtained with a network that has been calibrated so that the neurons have about 10% variance in their firing rates in response to a common input current.

1.1 Connectivity

We assume a network of integrate-and-fire neurons that receive external excitatory or inhibitory spiking input. In biological networks, inhibition between these array neurons is mediated by populations of global inhibitory interneurons (Fig. 1a). They are driven by the excitatory neurons and inhibit them in return. In our model, we assume the forward connections between the excitatory and the inhibitory neurons to be strong, so that each spike of an excitatory neuron triggers a spike in the global inhibitory neurons. The strength of the total inhibition between the array neurons is adjusted by tuning the backward connections from the global inhibitory neurons to the array neurons. This configuration allows the fastest spreading of inhibition through the network and is consistent with findings that inhibitory interneurons tend to fire at high frequencies.

With this configuration, we can simplify the network by replacing the global inhibitory interneurons with full inhibitory connectivity between the array neurons (Fig. 1b). In addition, each neuron has a self-excitatory connection that facilitates the selection of this neuron as winner for repeated input.

2 Network Connectivity Constraints for a Winner-Take-All Mode

We first discuss the conditions for the connectivity under which the network operates in a hard winner-take-all mode. For this analysis, we assume that the neurons receive spike trains of regular frequency. We also assume the neurons to be non-leaky. The membrane potentials Vi, i = 1 . . . N then satisfy the equation of a non-leaky integrate-

Figure 2: Membrane potential of the winning neuron k (a) and another neuron in the array (b).
Black bars show the times of input spikes. Traces show the changes in the membrane potential caused by the various synaptic inputs. Black dots show the times of output spikes of neuron k.

and-fire neuron model with non-conductance-based synapses:

dVi/dt = VE \u2211_n \u03b4(t \u2212 ti^(n)) \u2212 VI \u2211_{j=1, j\u2260i}^{N} \u2211_m \u03b4(t \u2212 sj^(m))    (1)

The membrane resting potential is set to 0. Each neuron receives external excitatory input and inhibitory connections from all other neurons. All inputs to a neuron are spikes and its output is also transmitted as spikes to other neurons. We neglect the dynamics of the synaptic currents and the delay in the transmission of the spikes. Each input spike causes a fixed discontinuous jump in the membrane potential (VE for the excitatory synapse and VI for the inhibitory). Each neuron i spikes when Vi \u2265 Vth and is reset to Vi = 0. Immediately afterwards, it receives a self-excitation of weight Vself. All potentials satisfy 0 \u2264 Vi \u2264 Vth, that is, an inhibitory spike cannot drive the membrane potential below ground. All neurons i \u2208 1 . . . N, i \u2260 k receive excitatory input spike trains of constant frequency ri. Neuron k receives the highest input frequency (rk > ri \u2200 i \u2260 k).

As soon as neuron k spikes once, it has won the computation. Depending on the initial conditions, other neurons can at most have transient spikes before the first spike of neuron k. For this hard winner-take-all mode, the network has to fulfill the following constraints (Fig. 2):

(a) Neuron k (the winning neuron) spikes after receiving nk = n input spikes that cause its membrane potential to exceed threshold. After every spike, the neuron is reset to Vself:

Vself + nk VE \u2265 Vth    (2)

(b) As soon as neuron k spikes once, no other neuron i \u2260 k can spike because it receives an inhibitory spike from neuron k.
Another neuron can receive up to n spikes even if its input spike frequency is lower than that of neuron k, because the neuron is reset to Vself after a spike, as illustrated in Figure 2. The resulting membrane voltage has to be smaller than before:

ni \u00b7 VE \u2264 nk \u00b7 VE \u2264 VI    (3)

(c) If a neuron j other than neuron k spikes in the beginning, there will be some time in the future when neuron k spikes and becomes the winning neuron. From then on, the conditions (a) and (b) hold, so a neuron j \u2260 k can at most have a few transient spikes. Let us assume that neurons j and k spike with almost the same frequency (but rk > rj). For the inter-spike intervals \u2206i = 1/ri this means \u2206j > \u2206k. Since the spike trains are not synchronized, an input spike to neuron k has a changing phase offset \u03c6 from an input spike of neuron j. At every output spike of neuron j, this phase decreases by \u2206\u03c6 = nk(\u2206j \u2212 \u2206k) until \u03c6 < nk(\u2206j \u2212 \u2206k). When this happens, neuron k receives (nk + 1) input spikes before neuron j spikes again, and crosses threshold:

(nk + 1) \u00b7 VE \u2265 Vth    (4)

We can choose Vself = VE and VI = Vth to fulfill the inequalities (2)-(4). VE is adjusted to achieve the desired nk.

Case (c) happens only under certain initial conditions, for example when Vk \u226a Vj or when neuron j initially received a spike train of higher frequency than neuron k. A leaky integrate-and-fire model will ensure that all membrane potentials are discharged (Vi = 0) at the onset of a stimulus. The network will then select the winning neuron after receiving a pre-determined number of input spikes, and this winner will have the first output spike.

2.1 Poisson-Distributed Inputs

In the case of Poisson-distributed spiking inputs, there is a probability associated with the correct winner being selected.
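Before turning to the Poisson case, the regular-input selection rule derived above can be checked in a short event-driven simulation. The sketch below is our illustration, not the authors' chip implementation: it applies the non-leaky dynamics of Eq. (1) with the suggested weight choice Vself = VE and VI = Vth, and all rates, phases and the threshold are assumed values chosen for the example.

```python
# Sketch (illustrative parameters, not the authors' implementation):
# non-leaky integrate-and-fire winner-take-all, Eqs. (1)-(4).
V_th = 1.0
n_k = 4                      # output spikes need n_k inputs after the first
V_E = V_th / (n_k + 1)       # satisfies (n_k + 1) * V_E >= V_th  (Eq. 4)
V_self, V_I = V_E, V_th      # weight choice suggested in the text

rates  = [100.0, 100.0, 120.0, 100.0]   # Hz; neuron 2 has the shortest ISI
phases = [0.003, 0.007, 0.001, 0.005]   # s; unsynchronized input trains
n_spk  = [15, 15, 18, 15]               # spikes per train within 150 ms

# Merge all input spikes into one time-ordered list of (time, target).
events = sorted((phases[i] + k / rates[i], i)
                for i in range(len(rates)) for k in range(n_spk[i]))

V = [0.0] * len(rates)       # all membranes discharged at stimulus onset
winners = []                 # (time, neuron) of output spikes
for t, i in events:
    V[i] += V_E                              # excitatory input jump
    if V[i] >= V_th:                         # threshold crossing: output spike
        winners.append((t, i))
        V[i] = V_self                        # reset plus self-excitation
        for j in range(len(V)):
            if j != i:                       # inhibit every other neuron,
                V[j] = max(0.0, V[j] - V_I)  # floored at ground (0 <= V)

print(sorted({i for _, i in winners}))       # -> [2]
```

With Vth/VE = 5, only the neuron driven at 120 Hz ever crosses threshold; each of its output spikes pushes every other membrane back towards ground, which is exactly the hard winner-take-all behaviour derived above.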
This probability depends on the Poisson rate \u03bd and the number of spikes n needed for the neuron to reach threshold. The probability that m input spikes arrive at a neuron in the period T is given by the Poisson distribution

P(m, \u03bdT) = e^(\u2212\u03bdT) (\u03bdT)^m / m!    (5)

We assume that all neurons i receive an input rate \u03bdi, except the winning neuron, which receives a higher rate \u03bdk. All neurons are completely discharged at t = 0. The network will make a correct decision at time T if the winner crosses threshold exactly then with its nth input spike, while all other neurons received fewer than n spikes until then. The winner receives the nth input spike at T if it received n\u22121 input spikes in [0; T[ and one at time T. This results in the probability density function

pk(T) = \u03bdk P(n\u22121, \u03bdkT)    (6)

The probability that the other N\u22121 neurons receive n\u22121 or fewer spikes in [0; T[ is

P0(T) = \u220f_{i=1, i\u2260k}^{N} ( \u2211_{j=0}^{n\u22121} P(j, \u03bdiT) )    (7)

For a correct decision, the output spike of the winner can happen at any time T > 0, so we integrate over all times T:

P = \u222b_0^\u221e pk(T) \u00b7 P0(T) dT = \u222b_0^\u221e \u03bdk P(n\u22121, \u03bdkT) \u00b7 \u220f_{i\u2260k} ( \u2211_{j=0}^{n\u22121} P(j, \u03bdiT) ) dT    (8)

We did not find a closed solution for this integral, but we can discuss its properties as n is varied by changing the synaptic efficacies. For n = 1, every input spike elicits an output spike. The probability of having an output spike from neuron k is then directly dependent on the input rates, since no computation takes place in the network. For n \u2192 \u221e, the integration times to determine the rates of the Poisson-distributed input spike trains are large, and the neurons perform a good estimation of the input rate.
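Although the integral in Eq. (8) has no known closed form, it is straightforward to evaluate numerically. The sketch below is an illustration under assumed parameters, not the authors' analysis code; with a common loser rate \u03bd and winner rate f\u03bd, the substitution u = \u03bdT removes the absolute rate, leaving only f, n and N:

```python
import math

def pois(m, x):
    # Poisson probability P(m, x) = exp(-x) x^m / m!   (Eq. 5)
    return math.exp(-x) * x ** m / math.factorial(m)

def p_correct(f, n, N, du=1e-3, u_max=60.0):
    # Midpoint-rule integration of Eq. (8) over u = nu*T, with all losers
    # at rate nu and the winner at f*nu (the special case of Eq. (9)).
    total, u = 0.0, du / 2.0
    while u < u_max:
        winner = f * pois(n - 1, f * u)                        # pk, Eq. (6)
        losers = sum(pois(j, u) for j in range(n)) ** (N - 1)  # P0, Eq. (7)
        total += winner * losers * du
        u += du
    return total

print(round(p_correct(1.2, 1, 2), 3))   # n = 1 has closed form f/(1+f) -> 0.545
print(round(p_correct(1.5, 8, 8), 3))   # larger n improves discrimination
```

For n = 1 the computed value reproduces the closed-form result f/(1+f); raising n increases the probability of a correct decision at the price of integrating more input spikes per decision.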
The network can then discriminate small changes in the input frequencies. This gain in precision leads to a slow response time of the network, since a large number of input spikes is integrated before an output spike of the network.

The winner-take-all architecture can also be used with a latency spike code. In this case, the delay of the input spikes after a global reset determines the strength of the signal. The winner is selected after the first input spike to the network (nk = 1). If all neurons are discharged at the onset of the stimulus, the network does not require the global reset. In general, the computation is finished at a time nk \u00b7 \u2206k after the stimulus onset.

3 Results

We implemented this architecture on a chip with 64 integrate-and-fire neurons implemented in analog VLSI technology. These neurons follow the model equation (1), except that they also show a small linear leakage. Spikes from the neurons are communicated off-chip using an asynchronous address-event representation (AER) protocol. When a neuron spikes, the chip outputs the address of this neuron (or spike) onto a common digital bus (see Figure 3). An external spike interface module (consisting of a custom computer board that can be programmed through the PCI bus) receives the incoming spikes from the chip and retransmits spikes back to the chip using information stored in a routing table. This module can also monitor spike trains from the chip and send spikes from a stored list. Through this module and the AER protocol, we implement the connectivity needed for the winner-take-all network in Figure 1. All components have been used and described in previous work [Boahen, 2000, Liu et al., 2001].

Figure 3: The connections are implemented by transmitting spikes over a common bus (grey arrows). Spikes from aVLSI neurons in the network are recorded by the digital interface and can be monitored and rerouted to any neuron in the array.
Additionally, externally generated spike trains can be transmitted to the array through the sequencer.

We configure this network according to the constraints described above. Figure 4 illustrates the network behaviour with a spike raster plot. At time t = 0, the neurons receive inputs with the same regular firing frequency of 100Hz, except for one neuron which receives a higher input frequency of 120Hz. The synaptic efficacies were tuned so that threshold is reached with 6 input spikes, after which the network selects the neuron with the strongest input as the winner.

Figure 4: Example raster plot of the spike trains to and from the neurons: (a) Input: starting from 0 ms, the neurons are stimulated with spike trains of a regular frequency of 100Hz, but randomized phase. Neuron number 42 receives an input spike train with an increased frequency of 120Hz. (b) Output without WTA connectivity: after an adjustable number of input spikes, the neurons start to fire with a regular output frequency. The output frequencies of the neurons are slightly different due to mismatch in the synaptic efficacies. Neuron 42 has the highest output frequency since it receives the strongest input. (c) Output with WTA connectivity: only neuron 42 with the strongest input fires; all other neurons are suppressed.

We characterized the discrimination capability of the winner-take-all implementation by measuring the minimal frequency, relative to the common input rate of the other neurons, to which the input of a neuron has to be raised for it to be selected as the winner. The neuron being tested receives an input of regular frequency f \u00b7 100Hz, while all other neurons receive 100Hz. The histogram of the minimum factors f for all neurons is shown in Figure 5. On average, the network can discriminate a difference in the input frequency of 10%.
This value is identical to the variation in the synaptic efficacies of the neurons, which had been compensated to a mismatch of 10%. We can therefore conclude that the implemented winner-take-all network functions according to the above discussion of the constraints. Since only the timing information of the spike trains is used, the results can be extended to a wide range of input frequencies different from 100Hz.

To test the performance of the network with Poisson inputs, we stimulated all neurons with Poisson-distributed spike trains of rate \u03bd, except neuron k, which received the rate \u03bdk = f\u03bd. Eqn. (8) then simplifies to

P = \u222b_0^\u221e f\u03bd P(n\u22121, f\u03bdT) \u00b7 ( \u2211_{i=0}^{n\u22121} P(i, \u03bdT) )^(N\u22121) dT    (9)

We show measured data and theoretical predictions for a winner-take-all network of 2 and 8 neurons (Fig. 6). Obviously, the discrimination performance of the network is substantially limited by the Poisson nature of the spike trains, compared to spike trains of regular frequency.

Figure 5: Discrimination capability of the winner-take-all network: x-axis: factor f by which the input frequency of a neuron has to be increased, compared to the input rate of the other neurons, in order for that neuron to be selected as the winner. y-axis: histogram of all 64 neurons.

Figure 6: Probability of a correct decision of the winner-take-all network, versus difference in frequencies (left), and number of input spikes n for a neuron to reach threshold (right). The measured data (crosses/circles) is shown with the prediction of the model (continuous lines), for a winner-take-all network of 2 neurons (red, circles) and 8 neurons (blue, crosses).

4 Conclusion

We analysed the performance and behavior of a winner-take-all spiking network that receives input spike trains.
The neuron that receives spikes with the highest rate is selected as the winner after a pre-determined number of input spikes. Assuming a non-leaky integrate-and-fire model neuron with constant synaptic weights, we derived constraints for the strength of the inhibitory connections and the self-excitatory connection of the neuron. A large inhibitory synaptic weight is in agreement with previous analysis for analog inputs [Jin and Seung, 2002]. The ability of a single spike from the inhibitory neuron to inhibit all neurons removes constraints on the matching of the time constants and efficacy of the connections from the excitatory neurons to the inhibitory neuron and vice versa. This feature makes the computation tolerant to variance in the synaptic parameters, as demonstrated by the results of our experiment.

We also studied whether the network is able to select the winner in the case of input spike trains which have a Poisson distribution. Because of the Poisson-distributed inputs, the network does not always choose the right winner (that is, the neuron with the highest input frequency), but there is a certain probability that the network does select the right winner. Results from the network show that the measured probabilities match the theoretical results. We are currently extending our analysis to a leaky integrate-and-fire neuron model and conductance-based synapses, which results in a more complex description of the network.

Acknowledgments

This work was supported in part by the IST grant IST-2001-34124. We acknowledge Sebastian Seung for discussions on the winner-take-all mechanism.

References

[Boahen, 2000] Boahen, K. A. (2000). Point-to-point connectivity between neuromorphic chips using address-events.
IEEE Transactions on Circuits & Systems II, 47(5):416\u2013434.

[Douglas and Martin, 2004] Douglas, R. and Martin, K. (2004). Cortical microcircuits. Annual Review of Neuroscience, 27:419\u2013451.

[Hahnloser et al., 2000] Hahnloser, R., Sarpeshkar, R., Mahowald, M. A., Douglas, R. J., and Seung, H. S. (2000). Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405:947\u2013951.

[Indiveri, 2000] Indiveri, G. (2000). Modeling selective attention using a neuromorphic analog VLSI device. Neural Computation, 12(12):2857\u20132880.

[Itti et al., 1998] Itti, L., Koch, C., and Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254\u20131259.

[Jin and Seung, 2002] Jin, D. Z. and Seung, H. S. (2002). Fast computation with spikes in a recurrent neural network. Physical Review E, 65:051922.

[Lazzaro et al., 1989] Lazzaro, J., Ryckebusch, S., Mahowald, M. A., and Mead, C. A. (1989). Winner-take-all networks of O(n) complexity. In Touretzky, D., editor, Advances in Neural Information Processing Systems, volume 1, pages 703\u2013711. Morgan Kaufmann, San Mateo, CA.

[Lee et al., 1999] Lee, D. K., Itti, L., Koch, C., and Braun, J. (1999). Attention activates winner-take-all competition among visual filters. Nature Neuroscience, 2:375\u2013381.

[Liu et al., 2001] Liu, S.-C., Kramer, J., Indiveri, G., Delbr\u00fcck, T., Burg, T., and Douglas, R. (2001). Orientation-selective aVLSI spiking neurons. Neural Networks: Special Issue on Spiking Neurons in Neuroscience and Technology, 14(6/7):629\u2013643.

[Oster and Liu, 2004] Oster, M. and Liu, S.-C. (2004). A winner-take-all spiking network with spiking inputs. In 11th IEEE International Conference on Electronics, Circuits and Systems.
ICECS \u201904: Tel Aviv, Israel, 13\u201315 December.
", "award": [], "sourceid": 2852, "authors": [{"given_name": "Matthias", "family_name": "Oster", "institution": null}, {"given_name": "Shih-Chii", "family_name": "Liu", "institution": null}]}