{"title": "Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons", "book": "Advances in Neural Information Processing Systems", "page_first": 1073, "page_last": 1080, "abstract": null, "full_text": "Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons\n\nEmre Neftci1, Elisabetta Chicca1, Giacomo Indiveri1, Jean-Jacques Slotine2, Rodney Douglas1\n\n1Institute of Neuroinformatics, UNI|ETH, Zurich\n\n2Nonlinear Systems Laboratory, MIT, Cambridge, Massachusetts, 02139\n\nemre@ini.phys.ethz.ch\n\nAbstract\n\nA non\u2013linear dynamic system is called contracting if initial conditions are forgotten exponentially fast, so that all trajectories converge to a single trajectory. We use contraction theory to derive an upper bound on the strength of recurrent connections that guarantees contraction for complex neural networks. Specifically, we apply this theory to a special class of recurrent networks, often called Cooperative Competitive Networks (CCNs), which are an abstract representation of the cooperative-competitive connectivity observed in cortex. This specific type of network is believed to play a major role in shaping cortical responses and selecting the relevant signal among distractors and noise. In this paper, we analyze contraction of combined CCNs of linear threshold units and verify the results of our analysis in a hybrid analog/digital VLSI CCN comprising spiking neurons and dynamic synapses.\n\n1 Introduction\n\nCortical neural networks are characterized by a large degree of recurrent excitatory connectivity and local inhibitory connections. This type of connectivity among neurons is remarkably similar across all areas of the cortex [1]. 
It has been argued that a good candidate model for a canonical micro-circuit, potentially used as a general-purpose cortical computational unit, is the soft Winner-Take-All (WTA) circuit [1], or the more general class of Cooperative Competitive Networks (CCNs) [2]. A CCN is a set of interacting neurons in which cooperation is achieved by local recurrent excitatory connections and competition is achieved via a group of inhibitory neurons, driven by the excitatory neurons and inhibiting them in turn (see Figure 1). As a result, CCNs perform both common linear operations and complex non\u2013linear operations. The linear operations include analog gain (linear amplification of the feed\u2013forward input, mediated by the recurrent excitation and/or common mode input) and locus invariance [3]. The non\u2013linear operations include non\u2013linear selection or soft winner\u2013take\u2013all (WTA) behavior [2, 4, 5], signal restoration [4, 6], and multi\u2013stability [2, 5]. CCNs can be modeled using linear threshold units, as well as recurrent networks of spiking neurons. The latter can be efficiently implemented in silicon using Integrate\u2013and\u2013Fire (I&F) neurons and dynamic synapses [7]. In this work we use a prototype VLSI CCN device, comprising 128 low-power I&F neurons [8] and 4096 dynamic synapses [9], that operates in real time, in a massively parallel fashion. The main goal of this paper is to address the open question of how to determine network parameters, such as the strength of recurrent excitatory couplings or global inhibitory couplings, so as to create well\u2013behaved complex networks composed of combinations of neural computational modules (such as CCNs), as depicted in Figure 1. The theoretical foundations used to address these problems are based on contraction theory [10]. 
By applying this theory to CCN models of linear threshold units, and to combinations of them, we find upper bounds for the contraction conditions. We then test the theoretical results on the VLSI CCN of spiking neurons, and on a combination of two mutually coupled CCNs. We show that the experimental data presented are consistent with the theoretical predictions.\n\nFigure 1: CCNs and combinations of CCNs. (a) A CCN consisting of a population of nearest-neighbor connected excitatory neurons (blue) receiving external input, and an inhibitory neuron which receives input from all the excitatory neurons and inhibits them back (red). (b) Photo of the VLSI CCN chip comprising I&F neurons. (c) Three coupled CCNs, showing examples of connectivity patterns between them.\n\n2 CCN of linear threshold units\n\nNeural network models of linear threshold units (LTUs) ignore many of the non\u2013linear processes that occur at the synaptic level and contain, by definition, no information about spike timing. However, networks of LTUs can functionally behave as networks of I&F neurons in a wide variety of cases [11]. Similarly, boundary conditions found for LTU networks can often be applied also to their I&F neuron network counterparts. For this reason, we start by analyzing a network of LTUs whose structure is analogous to that of the VLSI CCN of I&F neurons, and derive sufficient boundary conditions for contraction.\n\nIf we consider a CCN of LTUs recurrently connected according to a weight matrix W, as shown in Figure 1, we can express the network dynamics as:\n\n\u03c4i dxi/dt = \u2212xi + g((Wx)i + bi)  \u2200i = 1, ..., N  (1)\n\nwhere N is the total number of neurons in the system, the function g(x) = max(x, 0) is a half\u2013wave rectification non\u2013linearity ensuring that x \u2261 (x1, ..., xN)\u22a4 remains positive, bi are the external inputs applied to the neurons, and \u03c4i are the time constants of the neurons. 
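To make Eq. (1) concrete, the dynamics can be integrated numerically. The sketch below is illustrative only: the network size, weight values and time constants are hypothetical choices (picked so that the network is contracting, see Sec. 3), not the parameters of the VLSI chip, and `simulate_ccn` is our own helper name.

```python
import numpy as np

def simulate_ccn(W, b, tau, dt=1e-3, steps=5000):
    """Forward-Euler integration of tau_i dx_i/dt = -x_i + g((Wx)_i + b_i),
    with g the half-wave rectification g(x) = max(x, 0)."""
    x = np.zeros(len(b))
    for _ in range(steps):
        x = x + (dt / tau) * (-x + np.maximum(W @ x + b, 0.0))
    return x

# Hypothetical toy CCN as in Figure 1a: 8 excitatory LTUs on a ring with
# self and 1st-neighbor excitation, plus one global inhibitory unit.
N = 8
ws, we1, wei, wie = 0.2, 0.1, 0.5, 0.3   # assumed weights (not chip values)
W = np.zeros((N + 1, N + 1))
for i in range(N):
    W[i, i] = ws                          # self excitation
    W[i, (i - 1) % N] = we1               # 1st nearest neighbors
    W[i, (i + 1) % N] = we1
    W[i, N] = -wei                        # inhibitory -> excitatory
    W[N, i] = wie                         # excitatory -> inhibitory
tau = np.full(N + 1, 10e-3)               # 10 ms time constants
b = np.zeros(N + 1)
b[2] = 1.0                                # a single input bump at unit 2
x_inf = simulate_ccn(W, b, tau)
```

With these weights the steady state activates the stimulated unit and suppresses its neighbors through the global inhibitory unit, illustrating the selection behavior discussed below.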
We assume that neurons of each type (i.e. excitatory or inhibitory) have identical dynamics: we denote the time constant of the excitatory neurons with \u03c4ex and that of the inhibitory neurons with \u03c4ih. Throughout the paper, we will use the following notation for the weights: ws for self excitation, we1 and we2 for 1st and 2nd nearest-neighbor excitation respectively, wei for the inhibitory-to-excitatory coupling, and wie for the excitatory-to-inhibitory coupling. The W matrix has the following shape:\n\nW = \begin{pmatrix} w_s & w_{e1} & w_{e2} & \cdots & -w_{ei} \\ w_{e1} & w_s & w_{e1} & \cdots & -w_{ei} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ w_{e2} & \cdots & w_{e1} & w_s & -w_{ei} \\ w_{ie} & w_{ie} & \cdots & w_{ie} & 0 \end{pmatrix} \quad (2)\n\nA CCN can be used to implement a WTA computation. Depending on the strength of the connections, a CCN can implement a hard (HWTA) or soft (SWTA) WTA. A HWTA implements a max operation or selection mechanism: only the neuron receiving the strongest input can be active, and all other neurons are suppressed by the global inhibition. A SWTA implements more complex operations such as non\u2013linear selection, signal restoration, and multi\u2013stability: one or several groups of neurons can be active at the same time; neurons belonging to the same group cooperate through local excitation, and different groups compete through global inhibition. The activity of the \u2018winning\u2019 group of neurons can be amplified while the other groups are suppressed. Depending on the strength of the inhibitory and excitatory couplings, different regimes are observed. Specifically, in Sec. 
4 we compare a weakly coupled configuration, which guarantees contraction, with a strongly coupled configuration in which the output of the network depends on both the input and the history, showing hysteretic (non\u2013contracting) behavior in which the selected \u2018winning\u2019 group has an advantage over other groups of neurons because of the recurrent excitation.\n\n3 Contraction theory applied to CCNs of linear threshold units\n\n3.1 Contraction of a single network\n\nA formal analysis of contraction theory applied to non\u2013linear systems has been described in [10, 12]. Here we present an overview of the theory applied to the system of Eq. (1). In a contracting system, all trajectories converge to a single trajectory exponentially fast, independent of the initial conditions. In particular, if the system has a steady-state solution then, by definition, the state will contract and converge to that solution exponentially fast. Formally, the system is contracting if d\u2016\u03b4x\u2016/dt is uniformly negative (i.e. negative in the entire state space), where \u03b4x corresponds to the distance between two neighboring trajectories at a given time. In fact, by path integration we then have (d/dt) \u222b_{P1}^{P2} \u2016\u03b4x\u2016 < 0, where P1 and P2 are two points of state space (not necessarily neighboring). This leads to the following theorem:\n\nConsider a system whose dynamics are given by the differential equations dx/dt = f(x, t). The system is said to be contracting if all its trajectories converge exponentially to a single trajectory. A sufficient condition is that the symmetric part of the Jacobian J = \u2202f/\u2202x is uniformly negative definite. 
This condition can be written more explicitly as\n\n\u2203\u03b2 > 0, \u2200x, \u2200t \u2265 0:  Js \u2261 (1/2)(J + J\u22a4) \u2264 \u2212\u03b2 I\n\nwhere I is the identity matrix and Js is the symmetric part of J. This is equivalent to Js having all its eigenvalues uniformly negative [13].\n\nMore generally, we can define a local coordinate transformation \u03b4z = \u0398\u03b4x, where \u0398(x, t) is a square matrix such that M(x, t) = \u0398\u22a4\u0398 is a uniformly positive definite, symmetric and continuously differentiable metric. Note that the coordinate system z(x, t) does not need to exist, and will not in the general case, but \u03b4z and \u03b4z\u22a4\u03b4z can always be defined [14]. In this metric one can compute the generalized Jacobian F = ((d\u0398/dt) + \u0398J)\u0398\u22121. If the symmetric part of the generalized Jacobian, Fs, is negative definite, then the system is contracting. It has been shown that, in a suitable metric, this condition is also necessary [10]. In particular, if \u0398 is constant, Fs is negative definite if and only if (MJ)s is negative definite. In fact, since Fs = (\u0398\u22121)\u22a4(MJ)s\u0398\u22121, the condition v\u22a4Fs v < 0 \u2200v \u2208 R^N (negative definite matrix) is equivalent to (v\u22a4(\u0398\u22121)\u22a4)(MJ)s(\u0398\u22121v) < 0 \u2200\u0398\u22121v \u2208 R^N. Consequently, we can always choose a constant M to simplify our equations.\n\nLet us now see under which conditions the system defined by Eq. (1) is contracting. Except for the rectification non\u2013linearity, the full system is a linear time\u2013invariant (LTI) system, and it has a fixed point [15]. A common alternative to the half-wave rectification function is the sigmoid, in which case the Jacobian becomes differentiable. 
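For a constant Jacobian, the sufficient condition above is straightforward to check numerically. The sketch below uses hypothetical toy matrices, and `is_contracting` is our own helper name; for the state-dependent Jacobian of Eq. (1), the same check would have to hold uniformly over the state space.

```python
import numpy as np

def is_contracting(J):
    """Sufficient condition from the theorem above: all eigenvalues of
    the symmetric part Js = (J + J^T)/2 are strictly negative."""
    Js = 0.5 * (J + J.T)
    return bool(np.max(np.linalg.eigvalsh(Js)) < 0.0)

# Hypothetical linear system dx/dt = J x with weak symmetric coupling:
J_weak = np.array([[-1.0, 0.2, 0.0],
                   [0.2, -1.0, 0.2],
                   [0.0, 0.2, -1.0]])
# Same structure with strong coupling, pushing an eigenvalue above zero:
J_strong = np.array([[-1.0, 0.8, 0.0],
                     [0.8, -1.0, 0.8],
                     [0.0, 0.8, -1.0]])
print(is_contracting(J_weak))    # True
print(is_contracting(J_strong))  # False
```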
If we define fi(x, t) as\n\nfi(x, t) \u2261 dxi/dt = \u2212(1/\u03c4i) xi + (1/\u03c4i) g((Wx)i + bi)  (3)\n\nthen the Jacobian matrix is given by Jij = \u2202fi/\u2202xj = \u2212(1/\u03c4i)\u03b4ij + (1/\u03c4i) g\u2032(yi) wij, where yi = (Wx)i + bi and \u03c4i is the time constant of neuron i, with \u03c4i = \u03c4ex for the excitatory neurons and \u03c4i = \u03c4ih for the inhibitory ones. We assume that the wei and wie weights are non-zero, so that we can use the constant metric\n\nM = \begin{pmatrix} \tau_{ex} & & 0 \\ & \ddots & \\ 0 & & \tau_{ih}\, w_{ei}/w_{ie} \end{pmatrix} \quad (4)\n\nwhich is positive definite. With this metric, MJ can be written MJ = \u2212I + DK, where Dij = g\u2032(yi)\u03b4ij, and K is similar to W but with wei in place of wie. Since g is sigmoidal (and thus both it and its derivative are bounded), we can then use the method proposed in [16] to determine a sufficient condition for contraction. This leads to a condition of the form \u03bbmax < 0, where\n\n\u03bbmax = 2we1 + 2we2 + ws \u2212 1  (5)\n\nA graphical representation of the boundaries defined by this contraction condition is provided in Figure 2. The term |\u03bbmax| is called the contraction rate with respect to the metric M. It is of particular interest because it is a lower bound for the rate at which the system converges to its solution in that metric.\n\nFigure 2: Qualitative phase diagram for a single CCN of LTUs. We show here the possible regimes of the system given in Eq. (1) as a function of excitation and inhibition. In region D the rates would grow without bounds if there were no refractory period for the neurons. We see that a system which is unstable without inhibition cannot be in region A (i.e. within the boundaries of Eq. (5)). 
Note, however, that we do not quantitatively know the boundaries between B and C, or between C and D.\n\n3.2 Contraction of feed\u2013back combined CCNs\n\nOne of the powerful features of contraction theory is the following: if a complex system is composed of coupled (feed\u2013forward and feed\u2013back) subsystems that are individually contracting, then it is possible to find a sufficient condition for contraction without computing the system\u2019s full Jacobian. In addition, it is possible to bound the full system\u2019s contraction rate. Let Fs be the symmetric part of the Jacobian of two bi\u2013directionally coupled subsystems, with symmetric feed\u2013back couplings. Then Fs can be written with four blocks of matrices:\n\nF_s = \begin{pmatrix} F_{1s} & G \\ G^\top & F_{2s} \end{pmatrix} \quad (6)\n\nwhere F1s and F2s refer to the Jacobians of the individual, decoupled subsystems, while G and G\u22a4 are the feed\u2013back coupling components. If we assume both subsystems are contracting, then a sufficient condition for contraction of the overall system is given by [17]:\n\n|\u03bbmax(F1s)| \u00b7 |\u03bbmax(F2s)| > \u03c3\u00b2(G)  \u2200t > 0, uniformly  (7)\n\nwhere |\u03bbmax(\u00b7)| is the contraction rate with respect to the metric used, and \u03c3(G) is the largest singular value of G, i.e. \u03c3\u00b2(G) is the largest eigenvalue of G\u22a4G. By the eigenvalue interlacing theorem [13], the contraction rate of the combined system is bounded by those of its subsystems: |\u03bbmax(Fs)| \u2264 min(|\u03bbmax(F1s)|, |\u03bbmax(F2s)|).\n\nFor the specific example of a combined system comprising two identical subsystems coupled by a uniform coupling matrix G = w_fb \u00b7 I, we have \u03c3\u00b2(G) = w_fb\u00b2. The combined system is contracting if:\n\n|w_fb| < |\u03bbmax|  (8)\n\nThe results obtained with this analysis can be generalized to more than two combined subsystems, and to different types of coupling matrices [17]. Note that in a feed\u2013forward or a negative\u2013feedback case (i.e. 
at least one of the \u2018G\u2013blocks\u2019 in the non\u2013symmetric form is negative semidefinite), the system is automatically contracting, provided that both subsystems are contracting. In this case, the condition for contraction of the combined system given by Eq. (8) relaxes to the one-sided condition w_fb < |\u03bbmax|. Note that the contraction rate is an observable quantity; one can therefore build a contracting system consisting of an arbitrary number of CCNs as follows: 1. Determine the contraction rate of two CCNs, either by using Eq. (5) or by measuring it. 2. Use Eq. (7) to set the weight of the coupling, and compute the upper bound on the contraction rate of the combined system as explained above. 3. Repeat the procedure with a new CCN and the combined one.\n\n4 Contraction in a VLSI CCN of spiking neurons\n\nThe VLSI device used in this work implements a CCN of spiking neurons using an array of low\u2013power I&F neurons with dynamic synapses [8, 18]. The chip has been fabricated using a standard AMS 0.35 \u00b5m CMOS process, and covers an area of about 10 mm2. It contains 124 excitatory neurons, with self, 1st, 2nd and 3rd nearest\u2013neighbor recurrent excitatory connections, and 4 inhibitory neurons (all\u2013to\u2013all bi\u2013directionally connected to the excitatory neurons). Each neuron receives input currents from a row of 32 afferent plastic synapses that use the Address-Event Representation (AER) to receive spikes. The spiking activity of the neurons is also encoded using the AER. In this representation, input and output spikes are real\u2013time asynchronous digital events that carry analog information in their temporal structure. 
Figure 3: Contraction of a single VLSI CCN. (a) Raster plot of the input stimulus (left) and its mean firing rates (right): the membrane potentials of the I&F neurons are set to a random initial state by stimulating them with uncorrelated Poisson spike trains of constant mean frequency (up to the dashed line). Then the network is stimulated with 2 Gaussian bumps of different amplitude, centered at neuron 30 and neuron 80, while all the neurons receive a constant level of uncorrelated input during the whole trial. (b) Response of the CCN to the stimulus presented in (a). (c) Mean responses over 100 trials, calculated after the red dashed line, with error bars. The shaded area represents the mean input stimulus presented throughout the experiment. The system selects the largest input and suppresses the noise and the smaller bump, irrespective of initial conditions and noise. Neurons 124 to 128 are inhibitory neurons and do not receive external input.\n\nWe can interface the chip to a workstation for prototyping experiments, using a dedicated PCI\u2013AER board [19]. This board allows us to stimulate the synapses on the chip (e.g. with synthetic trains of spikes), monitor the activity of the I&F neurons, and map events from one neuron to a synapse belonging to a neuron on the same chip and/or on a different chip. 
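In software, AER traffic reduces to a stream of (address, timestamp) events, and the mapping step performed by the PCI\u2013AER board can be pictured as a lookup table. The sketch below is purely illustrative: the addresses, the table layout and the function name `remap` are our own assumptions, not the board's actual event format.

```python
# Each event is (neuron_address, timestamp_us); the analog information is
# carried purely by the timing of these digital events.
events = [(30, 100), (80, 250), (30, 400)]

# Illustrative mapping table: route each source neuron's spikes to a list
# of destination synapse addresses, possibly on another chip.
mapping = {30: [(0, 130)], 80: [(1, 30), (1, 31)]}  # (chip_id, synapse_addr)

def remap(events, mapping):
    """Fan each monitored spike out to its target synapses, preserving timing."""
    out = []
    for addr, t in events:
        for target in mapping.get(addr, []):
            out.append((target, t))
    return out

routed = remap(events, mapping)
```

This fan-out is what allows arbitrary connectivity matrices (such as G in Sec. 3.2) to be realized between chips without rewiring the hardware.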
An analysis of the dynamics of our VLSI I&F neurons can be found in [20]; although the leakage term in our implemented neurons is constant, such neurons have been shown to exhibit responses qualitatively similar to those of standard linear I&F neurons [20].\n\nA steady-state solution is easily computable for a network of linear threshold units [5, 21]: it is a fixed point in state space, i.e. a set of activities for the neurons. In a VLSI network of I&F neurons the steady state will be modified by mismatch, and the activities will fluctuate due to external and microscopic perturbations (but remain in its vicinity if the system is contracting). To prove contraction experimentally in these types of networks, one would have to apply an input and test all possible initial conditions. This is clearly not possible, but we can verify under which conditions the system is compatible with contraction by repeating the same experiment with different initial conditions (see Sec. 4.1), and under which conditions it is not compatible with contraction by observing whether the system settles to different solutions when started from different initial conditions (see Sec. 4.3).\n\n4.1 Convergence to a steady state with a static stimulus\n\nThe VLSI CCN is stimulated by uncorrelated Poisson spike trains whose mean rates form two Gaussian\u2013shaped bumps along the array of neurons, one with a smaller amplitude than the other, superimposed on background noise (see Figure 3a). In a SWTA configuration, our CCN should select and amplify the largest bump while suppressing the smaller one and the noise. We set the neurons into random initial conditions by stimulating them with uncorrelated Poisson spike trains with a spatially uniform and constant mean rate, before applying the real input stimulus (before the dashed line in Figure 3a). Figure 3b shows the response of the CCN to this spike train, and Figure 3c is the response averaged over 100 trials. 
This experiment shows that, regardless of the initial conditions, the final response of the CCN in a SWTA configuration is always the same (see the small error bars in Figure 3c), as we would expect from a contracting system.\n\n4.2 Convergence with a non\u2013static stimulus, and contraction rate\n\nAs the condition for contraction does not depend on the external input, it also holds for time\u2013varying inputs. An interesting example input stimulus is a bump of activity moving along the array of neurons at a constant speed. In this case, the firing rates produced by the chip carry information about the system\u2019s contraction rate. We measured the response of the chip to such a stimulus, for both strong and weak recurrent couplings (see Figure 4). The strong coupling case produces slower responses to the input than the weak coupling case, as expected from a system having a lower contraction rate (see Figure 4c).\n\nFigure 4: Contraction rate in VLSI CCNs using non\u2013static stimulation. The input changed from an initial stage, in which all the neurons were randomly stimulated with constant mean frequencies (up to 3 s), to a second stage in which a moving stimulus (freshly generated from trial to trial) was applied. This stimulus consists of a bump of activity that is shifted from one neuron to the next. Panels (a) and (b) show single trials for two different configurations (weak and strong); the colors indicate the firing rates calculated with a 300 ms sliding time window. Panel (c) compares the mean rates of neuron #25 in the weakly coupled CCN (green), the strongly coupled CCN (blue) and the input (red), all normalized to their peak of activity and calculated over 50 trials. The blue line is delayed compared to the red and green lines: the stronger recurrent couplings reduce the contraction rate.\n\nThe system\u2019s condition for contraction does not depend on the individual neurons\u2019 time constants, although the contraction rate in the original metric does. This also applies to the non\u2013static input case, in which the system converges to the expected solution independently of the neurons\u2019 time constants. Local mismatch effects in the VLSI chip lead to an effective weight matrix whose elements ws, we1, we2, wie are not identical throughout the array. This, combined with the high gain of the strong coupling and the variance produced by the input Poisson spike trains during the initial phase, explains the emergence of \u201cpseudo-random\u201d winners around neurons 30, 60 and 80 in Figure 4b.\n\n4.3 A non\u2013contracting example\n\nWe expect a CCN to be non\u2013contracting when the coupling is strong: in this condition the CCN exhibits hysteretic behavior [22], so the position of the winner strongly depends on the network\u2019s initial conditions. Figure 5 illustrates this behavior with a CCN with very strong recurrent weights.\n\n4.4 Contraction of combined systems\n\nBy using a multi-chip AER communication infrastructure [19] we can connect multiple chips together with arbitrary connectivity matrices (e.g. G in Sec. 3.2), and repeat experiments analogous to the ones of Sec. 4.1. 
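For such combined experiments, the coupling bound of Sec. 3.2 amounts to a simple inequality check. The sketch below is illustrative: the contraction-rate values are hypothetical, and `coupling_is_safe` is our own helper name.

```python
import numpy as np

def coupling_is_safe(rate1, rate2, G):
    """Sufficient condition (7): the product of the two subsystems'
    contraction rates must exceed sigma^2(G), the squared largest
    singular value of the coupling matrix G."""
    sigma2 = np.linalg.norm(G, 2) ** 2  # matrix 2-norm = largest singular value
    return bool(rate1 * rate2 > sigma2)

# Two identical CCNs with an assumed contraction rate |lambda_max| = 0.6,
# coupled by G = w_fb * I; the condition reduces to |w_fb| < 0.6.
print(coupling_is_safe(0.6, 0.6, 0.4 * np.eye(4)))  # True  (0.36 > 0.16)
print(coupling_is_safe(0.6, 0.6, 0.8 * np.eye(4)))  # False (0.36 < 0.64)
```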
Figure 6 shows the response of two CCNs, combined via a connectivity matrix as shown in Figure 6b, to three input bumps of activity in a contracting configuration.\n\n5 Conclusion\n\nWe applied contraction theory to combined Cooperative Competitive Networks (CCNs) of Linear Threshold Units (LTUs) and determined sufficient conditions for contraction. We then tested the theoretical predictions on neuromorphic VLSI implementations of CCNs, by measuring their response to different types of stimuli with different random initial conditions. We used these results to determine parameter settings for single and combined networks of spiking neurons which make the system behave as a contracting one. Similarly, we verified experimentally that CCNs with strong recurrent couplings are not contracting, as predicted by the theory.\n\nFigure 5: VLSI CCN in a non-contracting configuration. 
We compare a CCN with very strong lateral recurrent excitation and low inhibition to a weakly coupled CCN. The figures present the raster plots and mean rates of the CCNs\u2019 responses (calculated after the dashed line) to the same stimuli, starting from the two different initial conditions shown in (a) and (d). Panels (b) and (e) show the response of the contracting CCN, whereas panels (c) and (f) show that the strongly coupled system\u2019s response depends on the initial conditions; the \u201cStrong CCN\u201d is therefore non\u2013contracting.\n\nFigure 6: Contraction in combined CCNs. (a) and (d) Single-trial responses of CCN1 and CCN2 to the input stimulus shown in (c); (b) Connectivity matrix that couples the two CCNs (inverted identity matrix); (e) Mean response of the CCNs, averaged over 20 trials (data points), superimposed on the average input frequencies (shaded area). The response of the coupled CCNs converged to the same mean solution, consistent with the hypothesis that the combined system is contracting.\n\nAcknowledgments\n\nThis work was supported by the DAISY (FP6-2005-015803) EU grant, and by the Swiss National Science Foundation under Grant PMPD2-110298/1. We thank P. Del Giudice and V. Dante (ISS) for the original design of the PCI-AER board, and A. Whatley for help with the PCI-AER board software.\n\nReferences\n\n[1] R.J. Douglas and K.A.C. 
Martin. Neural circuits of the neocortex. Annual Review of Neuroscience, 27:419\u201351, 2004.\n\n[2] S. Amari and M. A. Arbib. Competition and cooperation in neural nets. In J. Metzler, editor, Systems Neuroscience, pages 119\u201365. Academic Press, 1977.\n\n[3] D. Hansel and H. Sompolinsky. Methods in Neuronal Modeling, chapter Modeling Feature Selectivity in Local Cortical Circuits, pages 499\u2013567. MIT Press, Cambridge, Massachusetts, 1998.\n\n[4] P. Dayan and L.F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, 2001.\n\n[5] R. Hahnloser, R. Sarpeshkar, M.A. Mahowald, R.J. Douglas, and S. Seung. Digital selection and analog amplification co-exist in an electronic circuit inspired by neocortex. Nature, 405(6789):947\u2013951, 2000.\n\n[6] R.J. Douglas, M.A. Mahowald, and K.A.C. Martin. Hybrid analog-digital architectures for neuromorphic systems. In Proc. IEEE World Congress on Computational Intelligence, volume 3, pages 1848\u20131853. IEEE, 1994.\n\n[7] G. Indiveri. Synaptic plasticity and spike-based computation in VLSI networks of integrate-and-fire neurons. Neural Information Processing - Letters and Reviews, 2007. (In press).\n\n[8] G. Indiveri, E. Chicca, and R. Douglas. A VLSI array of low-power spiking neurons and bistable synapses with spike\u2013timing dependent plasticity. IEEE Transactions on Neural Networks, 17(1):211\u2013221, Jan 2006.\n\n[9] C. Bartolozzi and G. Indiveri. Synaptic dynamics in analog VLSI. Neural Computation, 19:2581\u20132603, Oct 2007.\n\n[10] Winfried Lohmiller and Jean-Jacques E. Slotine. On contraction analysis for non-linear systems. Automatica, 34(6):683\u2013696, 1998.\n\n[11] B. Ermentrout. Reduction of conductance-based models with slow synapses to neural nets. Neural Computation, 6:679\u2013695, 1994.\n\n[12] Jean-Jacques E. Slotine. Modular stability tools for distributed computation and control. 
International Journal of Adaptive Control and Signal Processing, 17(6):397\u2013416, 2003.\n\n[13] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985.\n\n[14] Winfried Lohmiller and Jean-Jacques E. Slotine. Nonlinear process control using contraction theory. A.I.Ch.E. Journal, March 2000.\n\n[15] S. H. Strogatz. Nonlinear Dynamics and Chaos. Perseus Books, 1994.\n\n[16] O. Faugeras and J.-J. Slotine. Synchronization in neural fields. 2007.\n\n[17] Wei Wang and Jean-Jacques E. Slotine. On partial contraction analysis for coupled nonlinear oscillators. Biological Cybernetics, 92(1):38\u201353, 2005.\n\n[18] C. Bartolozzi, S. Mitra, and G. Indiveri. An ultra low power current\u2013mode filter for neuromorphic systems and biomedical signal processing. In IEEE Proceedings on Biomedical Circuits and Systems (BioCAS06), pages 130\u2013133, 2006.\n\n[19] E. Chicca, G. Indiveri, and R.J. Douglas. Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons. In B. Sch\u00f6lkopf, J.C. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems 19, Cambridge, MA, Dec 2007. Neural Information Processing Systems Foundation, MIT Press. (In press).\n\n[20] S. Fusi and M. Mattia. Collective behavior of networks with linear (VLSI) integrate and fire neurons. Neural Computation, 11:633\u201352, 1999.\n\n[21] Richard H. R. Hahnloser, H. Sebastian Seung, and Jean-Jacques Slotine. Permitted and forbidden sets in symmetric threshold-linear networks. Neural Computation, 15:621\u2013638, 2003.\n\n[22] E. Chicca. A Neuromorphic VLSI System for Modeling Spike\u2013Based Cooperative Competitive Neural Networks. 
PhD thesis, ETH Z\u00fcrich, Z\u00fcrich, Switzerland, April 2006.", "award": [], "sourceid": 260, "authors": [{"given_name": "Emre", "family_name": "Neftci", "institution": null}, {"given_name": "Elisabetta", "family_name": "Chicca", "institution": null}, {"given_name": "Giacomo", "family_name": "Indiveri", "institution": null}, {"given_name": "Jean-Jacques", "family_name": "Slotine", "institution": null}, {"given_name": "Rodney", "family_name": "Douglas", "institution": null}]}