{"title": "A configurable analog VLSI neural network with spiking neurons and self-regulating plastic synapses", "book": "Advances in Neural Information Processing Systems", "page_first": 545, "page_last": 552, "abstract": "We summarize the implementation of an analog VLSI chip hosting a network of 32 integrate-and-fire (IF) neurons with spike-frequency adaptation and 2,048 Hebbian plastic bistable spike-driven stochastic synapses endowed with a self-regulating mechanism which stops unnecessary synaptic changes. The synaptic matrix can be flexibly configured and provides both recurrent and AER-based connectivity with external, AER compliant devices. We demonstrate the ability of the network to efficiently classify overlapping patterns, thanks to the self-regulating mechanism.", "full_text": "A con\ufb01gurable analog VLSI neural network with\nspiking neurons and self-regulating plastic synapses\n\nwhich classi\ufb01es overlapping patterns\n\nM. Giulioni\u2217\n\nM. Pannunzi\n\nItalian National Inst. of Health, Rome, Italy\n\nItalian National Inst. of Health, Rome, Italy\n\nINFN-RM1, Rome, Italy\n\nINFN-RM2, Rome, Italy\n\ngiulioni@roma2.infn.it\n\nD. Badoni\n\nINFN-RM2, Rome, Italy\n\nV. Dante\n\nItalian National Inst. of Health, Rome, Italy\n\nINFN-RM1, Rome, Italy\n\nP. Del Giudice\n\nItalian National Inst. of Health, Rome, Italy\n\nINFN-RM1, Rome, Italy\n\nAbstract\n\nWe summarize the implementation of an analog VLSI chip hosting a network\nof 32 integrate-and-\ufb01re (IF) neurons with spike-frequency adaptation and 2,048\nHebbian plastic bistable spike-driven stochastic synapses endowed with a self-\nregulating mechanism which stops unnecessary synaptic changes. The synaptic\nmatrix can be \ufb02exibly con\ufb01gured and provides both recurrent and AER-based con-\nnectivity with external, AER compliant devices. 
We demonstrate the ability of the network to ef\ufb01ciently classify overlapping patterns, thanks to the self-regulating mechanism.\n\n\u2217http://neural.iss.infn.it/\n\n1 Introduction\n\nNeuromorphic analog VLSI devices [12] try to derive organizational and computational principles from biologically plausible models of neural systems, aiming to provide in the long run an electronic substrate for innovative, bio-inspired computational paradigms.\n\nIn line with standard assumptions in computational neuroscience, neuromorphic devices are endowed with adaptive capabilities through various forms of plasticity in the synapses which connect the neural elements. A widely adopted framework goes under the name of Hebbian learning, by which the ef\ufb01cacy of a synapse is potentiated (the post-synaptic effect of a spike is enhanced) if the pre- and post-synaptic neurons are simultaneously active on a suitable time scale. Different mechanisms have been proposed, some relying on the average \ufb01ring rates of the pre- and post-synaptic neurons (rate-based Hebbian learning), others based on tight constraints on the time lags between pre- and post-synaptic spikes (\u201cSpike-Timing-Dependent Plasticity\u201d).\n\nThe synaptic circuits described in what follows implement a stochastic version of rate-based Hebbian learning. In the last decade, it has been realized that general constraints plausibly met by any concrete implementation of a synaptic device in a neural network bear profound consequences on the capacity of the network as a memory system. Speci\ufb01cally, once one accepts that a synaptic element can neither have an unlimited dynamic range (i.e. synaptic ef\ufb01cacy is bounded), nor can it undergo arbitrarily small changes (i.e. 
synaptic ef\ufb01cacy has a \ufb01nite analog depth), it has been proven ([1], [7]) that a deterministic learning prescription implies an extremely low memory capacity, and a severe \u201cpalimpsest\u201d property: new memories quickly erase the trace of older ones. It turns out that a stochastic mechanism provides a general, logically appealing and very ef\ufb01cient solution: given the pre- and post-synaptic neural activities, the synapse is still made eligible for changing its ef\ufb01cacy according to a Hebbian prescription, but it actually changes its state with a given probability. The stochastic element of the learning dynamics would require ad hoc new elements, were it not for the fact that, for a spike-driven implementation of the synapse, the noisy activity of the neurons in the network can provide the needed \u201cnoise generator\u201d [7]. Therefore, for an ef\ufb01cient learning electronic network, the implementation of the neuron as a spiking element is not only a requirement of \u201cbiological plausibility\u201d, but a compelling computational requirement. Learning in networks of spiking IF neurons with stochastic plastic synapses has been studied theoretically [7], [10], [2], and stochastic, bi-stable synaptic models have been implemented in silicon [8], [6]. One of the limitations so far, both at the theoretical and the implementation level, has been the arti\ufb01cially simple statistics of the stimuli to be learnt (e.g., no overlap between their neural representations). Very recently, in [4], a modi\ufb01cation of the above stochastic, bi-stable synaptic model has been proposed, endowed with a regulatory mechanism termed \u201cstop learning\u201d such that synaptic up- or down-regulation depends on the average activity of the postsynaptic neuron in the recent past; a synapse pointing to a neuron that is found to be highly active, or poorly active, should not be further potentiated or depressed, respectively. 
The reason behind the prescription is essentially that, for correlated patterns to be learnt by the network, a successful strategy should de-emphasize the coherent synaptic Hebbian potentiation that would result for the overlapping part of the synaptic matrix, and that would ultimately spoil the ability to distinguish the patterns. A detailed learning strategy along this line was proven in [13] to be appropriate for linearly separable patterns for a Perceptron-like network; the extension to spiking and recurrent networks is currently under study.\n\nIn section 2 we give an overview of the chip architecture and of the implemented synaptic model. In section 3 we show an example of the measurements performed on the chip to characterize the synaptic and neuronal parameters. In section 4 we report some characterization results compared with a theoretical prediction obtained from a chip-oriented simulation. The last section describes the chip's performance in a simple classi\ufb01cation task, and illustrates the improvement brought about by the stop-learning mechanism.\n\n2 Chip architecture and main features\n\nThe chip, already described in [3], implements a recurrent network of 32 integrate-and-\ufb01re neurons with spike-frequency adaptation and bi-stable, stochastic, Hebbian synapses. A completely recon\ufb01gurable synaptic matrix supports up to all-to-all recurrent connectivity, as well as AER-based external connectivity. Besides the arbitrary synaptic connectivity, the excitatory/inhibitory nature of each synapse can also be set.\n\nThe implemented neuron is the IF neuron with constant leakage term and a lower bound for the membrane potential V (t) introduced in [12] and studied theoretically in [9]. The circuit is borrowed from the low-power design described in [11], to which we refer the reader for details. 
Only 2 neurons can be directly probed (i.e., their \u201cmembrane potential\u201d sampled), while for all of them the emitted spikes can be monitored via AER [5]. The dendritic tree of each neuron is composed of up to 31 activated recurrent synapses and up to 32 activated external, AER ones. For the recurrent synapses, each impinging spike triggers short-term (and possibly long-term) changes in the state of the synapse, as detailed below. Spikes from neurons outside the chip come in the form of AER events, and are targeted to the correct AER synapse by the X-Y decoder. Synapses which are set to be excitatory, either AER or recurrent, are plastic; inhibitory synapses are \ufb01xed. Spikes generated by the neurons in the chip are arbitrated for access to the AER bus for monitoring and/or mapping to external targets.\n\nThe synaptic circuit described in [3] implements the model proposed in [4] and brie\ufb02y motivated in the Introduction. The synapse possesses only two states of ef\ufb01cacy (a bi-stable device): the internal synaptic dynamics is associated with an internal variable X; when X > \u03b8X the ef\ufb01cacy is set to be potentiated, otherwise it is set to be depressed. X is subject to short-term, spike-driven dynamics: upon the arrival of an impinging spike, X is a candidate for an upward or downward jump, depending on the instantaneous value of the post-synaptic potential Vpost being above or below a threshold \u03b8V . The jump is actually performed or not depending on a further variable, as explained below. In the absence of intervening spikes, X is forced to drift towards a \u201chigh\u201d or \u201clow\u201d value depending on whether the last jump left it above or below \u03b8X. This preserves the synaptic ef\ufb01cacy on long time scales.\n\nA further variable is associated with the post-synaptic neuron dynamics, which essentially measures the average \ufb01ring activity. 
Following [4], by analogy with the role played by the intracellular concentration of calcium ions upon spike emission, we will call it a \u201ccalcium variable\u201d C(t). C(t) undergoes an upward jump when the postsynaptic neuron emits a spike, and linearly decays between two spikes. It therefore integrates the spike sequence and, when compared to suitable thresholds as detailed below, it determines which candidate synaptic jumps will be allowed to occur; for example, it can constrain the synapse to stop up-regulating because the post-synaptic neuron is already very active. C(t) acts as a regulatory element of the synaptic dynamics.\n\nThe resulting short-term dynamics for the internal synaptic variable X is described by the following conditions: X(t) \u2192 X(t) + Jup if Vpost(t) > \u03b8V and VTH1 < C(t) < VTH3; X(t) \u2192 X(t) \u2212 Jdw if Vpost(t) \u2264 \u03b8V and VTH1 < C(t) < VTH2, where Jup and Jdw are positive constants. A detailed description of the circuits implementing these conditions can be found in [3].\n\nIn \ufb01gure 1 we illustrate the effect of the calcium dynamics on X. Increasing input forces the post-synaptic neuron to \ufb01re at increasing frequencies. As long as C(t) < VTH2 = VTH3, X undergoes both up and down jumps. When C(t) > VTH2 = VTH3, jumps are inhibited and X is forced to drift towards its lower bound.\n\nFigure 1: Illustrative example of the stop-learning mechanism (see text). Top to bottom: post-synaptic neuron potential Vpost, calcium variable C, internal synaptic variable X, pre-synaptic neuron potential Vpre.\n\n3 LTP/LTD probabilities: measurement vs. chip-oriented simulation\n\nWe report synapse potentiation (LTP) and depression (LTD) probabilities measured from the chip and compare the experimental results to simulations.\n\nFor each synapse in a subset of 31, we generate a pre-synaptic Poisson spike train at 70 Hz. 
The post-synaptic neuron is forced to \ufb01re a Poisson spike train by applying an external DC current and a Poisson train of inhibitory spikes through AER. Setting both the potentiated and depressed ef\ufb01cacies to zero, the activity of the post-synaptic neuron can be easily tuned by varying the amplitude of the DC current and the frequency of the inhibitory AER train. We initialize the 31 (AER) synapses to the depressed (potentiated) state and we monitor the post-synaptic neuron activity during a stimulation trial lasting 0.5 seconds. At the end of the trial we read the synaptic state using an AER protocol developed for this purpose. For each chosen value of the post-synaptic \ufb01ring rate, we evaluate the probability of \ufb01nding the synapses in a potentiated (depressed) state by repeating the test 50 times. The results reported in \ufb01gure 2 (solid lines) represent the average LTP and LTD probabilities per trial over the 31 synapses. Tests were performed with the calcium mechanism both active and inactive. When the calcium mechanism is inactive, the LTP probability increases monotonically with the post-synaptic \ufb01ring rate, while when the calcium circuit is activated the LTP probability has a maximum for \u03bdpost around 80 Hz. Identical tests were also run in simulation (dashed curves in \ufb01gure 2). For the purpose of a meaningful comparison with the chip behaviour, the relevant parameters affecting the neural and synaptic dynamics, and their distributions (due to inhomogeneities and mismatches), were characterized.\n\nSimulated and measured data are in qualitative agreement. The parameters chosen for these tests are the same as those used for the classi\ufb01cation task described in the next section.\n\n[Plot: fraction of potentiated synapses w+ vs. \u03bdpost [Hz]; solid lines: experiment, dashed lines: simulation.]\n\nFigure 2: Transition probabilities. 
Red and blue lines are LTP probabilities with and without the calcium stop-learning mechanism, respectively. Gray lines are LTD probabilities without the calcium stop-learning mechanism; the case of LTD with the Ca mechanism is not shown. Error bars are standard deviations over the 50 trials.\n\n4 Learning overlapping patterns\n\nWe con\ufb01gured the synaptic matrix to obtain a perceptron-like network with 1 output and 32 inputs (32 AER synapses). 31 synapses are set as plastic excitatory ones; the 32nd is set as inhibitory and used to modulate the post-synaptic neuron activity. Our aim is to teach the perceptron to classify two patterns, \u201cUp\u201d and \u201cDown\u201d, through a semi-supervised learning strategy. We expect that after learning the perceptron will respond with a high output frequency for pattern \u201cUp\u201d and with a low output frequency for pattern \u201cDown\u201d. The self-regulating Ca mechanism is exploited to improve performance when the Up and Down patterns have a signi\ufb01cant overlap. The learning is semi-supervised: for each pattern a \u201cteacher\u201d input is sent to the output neuron, steering its activity to be high or low, as desired. At the end of the learning period the \u201cteacher\u201d is turned off and the perceptron output is driven only by the input stimuli: under these conditions its classi\ufb01cation ability is tested.\n\nWe present learning performance for input patterns with increasing overlap, and demonstrate the effect of the stop-learning mechanism (overlap ranging from 6 to 14 synapses).\n\nDuring stimulation, active pre-synaptic inputs are Poisson spike trains at 70 Hz, while inactive inputs are Poisson spike trains at 10 Hz. Each trial lasts half a second. Up and Down patterns are randomly presented with equal probability. The teaching signal, a combination of an excitatory constant current and an inhibitory AER spike train, forces the output \ufb01ring rate to 50 or 0.5 Hz. 
One run lasts 150 trials, which is suf\ufb01cient for the stabilization of the output frequencies. At the end of each trial we turn off the teaching signal, freeze the synaptic dynamics and read the state of each synapse using an AER protocol developed for this purpose. In these conditions we performed a 5-second test (\u201cchecking phase\u201d) to measure the perceptron frequencies when pattern Up or pattern Down is presented. Each experiment includes 50 runs. For each run we change: a) the \u201cde\ufb01nition\u201d of patterns Up and Down: the inputs activated by patterns Up and Down are chosen randomly at the beginning of each run; b) the initial synaptic state, with the constraint that only about 30% of the synapses are potentiated; c) the stimulation sequence.\n\nFor the \ufb01rst experiment we turned off the stop-learning mechanism and chose orthogonal patterns. In this case the perceptron was able to correctly classify the stimuli: after about 50 trials, choosing a suitable threshold, one can discriminate the perceptron output for the different patterns (lower left panel of \ufb01gure 4). The output frequency separation slightly increases until trial number 100, remaining almost stable after that point.\n\nWe then studied the case of overlapping patterns, with the calcium mechanism both active and inactive. We repeated the experiment with increasing overlap: 6, 10 and 14 (implying an increase in the coding level from 0.5 for the orthogonal case to 0.7 for an overlap of 14). Only the threshold Kup_high is active (the threshold above which up jumps are inhibited). The calcium circuit parameters are tuned so that the Ca variable passes Kup_high for a mean \ufb01ring rate of the post-synaptic neuron around 80 Hz. 
We show in \ufb01gure 3 the distributions of the potentiated fraction of the synapses over the 50 runs, at different stages along the run, for overlap 10 with inactive (upper panels) and active (lower panels) calcium mechanism. We divided the synapses into three subgroups: Up (red), synapses with pre-synaptic input activated solely by the Up pattern; Down (blue), synapses with pre-synaptic inputs activated only by the Down pattern; and Overlap (green), synapses with pre-synaptic inputs activated by both pattern Up and pattern Down. The state of the synapses is recorded after every learning step. Accumulating statistics over the 50 runs we obtain the distributions reported in \ufb01gure 3. The fraction of potentiated synapses is calculated over the number of synapses belonging to each subgroup. When the stop-learning mechanism is inactive, at the end of the experiment the green distribution of Overlap synapses is broad; when the calcium mechanism is active, the Overlap synapses tend to be depotentiated.\n\n[Histograms of P(w+) vs. w+ at trials 2, 50, 100 and 150; upper row: Ca mechanism inactive, lower row: Ca mechanism active; red: Up synapses, blue: Down synapses, green: Overlap synapses.]\n\nFigure 3: Distribution of the fraction of potentiated synapses. The number of inputs belonging to both patterns is 10. 
This result is the \u201cmicroscopic\u201d effect of the stop-learning mechanism: once the number of potentiated synapses is suf\ufb01cient to drive the perceptron output frequency above 80 Hz, the Overlap synapses tend to be depotentiated. Overlap synapses would be pushed half of the time towards the potentiated state and half of the time towards the depressed state, so that the Up synapses are more likely to reach the potentiated state earlier. When the stop-learning mechanism is active, once the potentiated synapses are enough to drive the output neuron to about 80 Hz, further potentiation is inhibited for all synapses, so that the Overlap synapses get depressed on average. This happens under the condition that the transition probabilities are suf\ufb01ciently small to avoid the learning being completely disrupted at each trial. The distribution of the output frequencies for increasing overlap is illustrated in \ufb01gure 4 (Ca mechanism inactive in the upper panels, active in the lower panels). The frequencies are recorded during the \u201cchecking phase\u201d. 
In blue are the histograms of the output frequency for the Down pattern, in red those for the Up pattern. It is clear from the \ufb01gure that the output frequency distributions remain well separated even for high overlap when the calcium mechanism is active.\n\nA quantitative parameter to describe the separation of the distributions is\n\n\u03b4 = (\u03bdup \u2212 \u03bddn) / (\u03c3\u00b2\u03bdup + \u03c3\u00b2\u03bddn)   (1)\n\nThe \u03b4 values are summarized in table 1.\n\n[Histograms of P(\u03bdck) vs. \u03bdck [Hz] for overlaps 0, 6, 10 and 14; upper row: Ca mechanism inactive, lower row: Ca mechanism active; blue: pattern Down, red: pattern Up.]\n\nFigure 4: Distributions of perceptron frequencies after learning two overlapped patterns. Blue bars refer to pattern Down stimulation, red bars refer to pattern Up. 
Each panel refers to a different overlap value.\n\nTable 1: Discrimination power [seconds]\n\n         overlap 0   overlap 6   overlap 10   overlap 14\nCa OFF   4.39        1.87        1.59         0.99\nCa ON    5.29        2.20        1.88         1.66\n\nFor each run the number of potentiated synapses is different, due to the random choice of Up, Down and Overlap synapses for each run and to the mismatches affecting the behavior of different synapses. The failure of the discrimination for high overlap in the absence of the stop-learning mechanism is due to the fact that the number of potentiated synapses can overcome the effect of the teaching signal for the Down pattern. The calcium mechanism, by de\ufb01ning a maximum number of allowed potentiated synapses, limits this problem. This offers the possibility of establishing an a priori threshold to discriminate the perceptron outputs, on the basis of the frequency corresponding to the maximum value of the LTP probability curve.\n\n5 Conclusions\n\nWe brie\ufb02y illustrated an analog VLSI chip implementing a network of 32 IF neurons and 2,048 recon\ufb01gurable, Hebbian, plastic, stop-learning synapses. Circuit parameters have been measured, as well as their dispersion across the chip. Using these data a chip-oriented simulation was set up and its results, compared to the experimental ones, demonstrate that the circuits' behavior follows the theoretical predictions. Having con\ufb01gured the network as a perceptron (31 AER synapses and one output neuron), a classi\ufb01cation task was performed. Stimuli with increasing overlap were used. The results show the ability of the network to ef\ufb01ciently classify the presented patterns, as well as the performance improvement due to the calcium stop-learning mechanism.\n\nReferences\n\n[1] D.J. Amit and S. Fusi. Neural Computation, 6:957, 1994.\n[2] D.J. Amit and G. Mongillo. Neural Computation, 15:565, 2003.\n[3] D. Badoni, M. Giulioni, V. Dante, and P. Del Giudice. In Proc. 
IEEE International Symposium on Circuits and Systems ISCAS06, pages 1227\u20131230, 2006.\n[4] J.M. Brader, W. Senn, and S. Fusi. Neural Computation (in press), 2007.\n[5] V. Dante, P. Del Giudice, and A. M. Whatley. The Neuromorphic Engineer newsletter, 2005.\n[6] E. Chicca et al. IEEE Transactions on Neural Networks, 14(5):1297, 2003.\n[7] S. Fusi. Biological Cybernetics, 87:459, 2002.\n[8] S. Fusi, M. Annunziato, D. Badoni, A. Salamon, and D.J. Amit. Neural Computation, 12:2227, 2000.\n[9] S. Fusi and M. Mattia. Neural Computation, 11:633, 1999.\n[10] P. Del Giudice, S. Fusi, and M. Mattia. Journal of Physiology Paris, 97:659, 2003.\n[11] G. Indiveri. In Proc. IEEE International Symposium on Circuits and Systems, 2003.\n[12] C. Mead. Analog VLSI and neural systems. Addison-Wesley, 1989.\n[13] W. Senn and S. Fusi. Neural Computation, 17:2106, 2005.", "award": [], "sourceid": 707, "authors": [{"given_name": "Massimiliano", "family_name": "Giulioni", "institution": null}, {"given_name": "Mario", "family_name": "Pannunzi", "institution": null}, {"given_name": "Davide", "family_name": "Badoni", "institution": null}, {"given_name": "Vittorio", "family_name": "Dante", "institution": null}, {"given_name": "Paolo", "family_name": "Giudice", "institution": null}]}