{"title": "Spike Timing-Dependent Plasticity in the Address Domain", "book": "Advances in Neural Information Processing Systems", "page_first": 1171, "page_last": 1178, "abstract": null, "full_text": "Spike Timing-Dependent Plasticity\n\nin the Address Domain\n\nR. Jacob Vogelstein1, Francesco Tenore2, Ralf Philipp2, Miriam S. Adlerstein2,\n\nDavid H. Goldberg2 and Gert Cauwenberghs2\n\n1Department of Biomedical Engineering\n\n2Department of Electrical and Computer Engineering\n\nJohns Hopkins University, Baltimore, MD 21218\nfjvogelst,fra,rphilipp,mir,goldberg,gertg@jhu.edu\n\nAbstract\n\nAddress-event representation (AER), originally proposed as a means\nto communicate sparse neural events between neuromorphic chips, has\nproven ef\ufb01cient in implementing large-scale networks with arbitrary,\ncon\ufb01gurable synaptic connectivity. In this work, we further extend the\nfunctionality of AER to implement arbitrary, con\ufb01gurable synaptic plas-\nticity in the address domain. As proof of concept, we implement a bi-\nologically inspired form of spike timing-dependent plasticity (STDP)\nbased on relative timing of events in an AER framework. Experimen-\ntal results from an analog VLSI integrate-and-\ufb01re network demonstrate\naddress domain learning in a task that requires neurons to group corre-\nlated inputs.\n\n1 Introduction\n\nIt has been suggested that the brain\u2019s impressive functionality results from massively par-\nallel processing using simple and ef\ufb01cient computational elements [1]. Developments in\nneuromorphic engineering and address-event representation (AER) have provided an in-\nfrastructure suitable for emulating large-scale neural systems in silicon, e.g., [2, 3]. Al-\nthough an integral part of neuromorphic engineering since its inception [1], only recently\nhave implemented systems begun to incorporate adaptation and learning with biological\nmodels of synaptic plasticity.\n\nA variety of learning rules have been realized in neuromorphic hardware [4, 5]. These sys-\ntems usually employ circuitry incorporated into the individual cells, imposing constraints\non the nature of inputs and outputs of the implemented algorithm. While well-suited to\nsmall assemblies of neurons, these architectures are not easily scalable to networks of hun-\ndreds or thousands of neurons. Algorithms based both on continuous-valued \u201cintracellular\u201d\nsignals and discrete spiking events have been realized in this way, and while analog com-\nputations may be performed better at the cellular level, we argue that it is advantageous\nto implement spike-based learning rules in the address domain. AER-based systems are\ninherently scalable, and because the encoding and decoding of events is performed at the\nperiphery, learning algorithms can be arbitrarily complex without increasing the size of\nrepeating neural units. Furthermore, AER makes no assumptions about the signals repre-\n\n\fSender\n\nReceiver\n\nr\ne\nd\no\nc\nn\nE\n\nData bus\n\n203\n\n1\n\ntime\n\nr\ne\nd\no\nc\ne\nD\n\n0\n1\n2\n3\n\nREQ\nACK\n\n0\n1\n2\n3\n\nREQ\nACK\n\nFigure 1: Address-event representation. Sender events are encoded into an address, sent\nover the bus, and decoded. Handshaking signals REQ and ACK are required to ensure that\nonly one cell pair is communicating at a time. Note that the time axis goes from right to\nleft.\n\nsented as spikes, so learning can address any measure of cellular activity. 
Much previous work has focused on rate-based Hebbian learning (e.g., [6]), but recently the possibility of modifying synapses based on the timing of action potentials has been explored in both the neuroscience [7, 8] and neuromorphic engineering disciplines [9]–[11]. This latter hypothesis gives rise to the possibility of learning based on causality, as opposed to mere correlation. We propose that AER-based neuromorphic systems are ideally suited to implement learning rules founded on this notion of spike timing-dependent plasticity (STDP). In the following sections, we describe an implementation of one biologically plausible STDP learning rule and demonstrate that table-based synaptic connectivity can be extended to table-based synaptic plasticity in a scalable and reconfigurable neuromorphic AER architecture.

2 Address-domain architecture

Address-event representation is a communication protocol that uses time-multiplexing to emulate extensive connectivity [12] (Fig. 1). In an AER system, one array of neurons encodes its activity in the form of spikes that are transmitted to another array of neurons. The “brute force” approach to communicating these signals would be to use one wire for each pair of neurons, requiring N wires for N cell pairs. Instead, an AER system identifies the location of a spiking cell and encodes it as an address, which is then sent across a shared data bus. The receiving array decodes the address and routes it to the appropriate cell, reconstructing the sender’s activity. Handshaking signals REQ and ACK are required to ensure that only one cell pair is using the data bus at a time. This scheme reduces the required number of wires from N to ~log2 N. Two pieces of information uniquely identify a spike: its location, which is explicitly encoded as an address, and the time at which it occurs, which need not be explicitly encoded because the events are communicated in real time. The encoded spike is called an address-event.

In its original formulation, AER implements a one-to-one connection topology, which is appropriate for emulating the optic and auditory nerves [12, 13]. To create more complex neural circuits, convergent and divergent connectivity is required. Several authors have discussed and implemented methods of enhancing the connectivity of AER systems to this end [14]–[16]. These methods call for a memory-based projective field mapping that enables routing an address-event to multiple receiver locations.

Figure 2: Enhanced AER for implementing complex neural networks. (a) Example neural network. The connections are labeled with their weight values. (b) The network in (a) is mapped to the AER framework by means of a look-up table.

The enhanced AER system employed in this paper is based on that of [17], which enables continuous-valued synaptic weights by means of graded (probabilistic or deterministic) transmission of address-events. This architecture employs a look-up table (LUT), an integrate-and-fire address-event transceiver (IFAT), and some additional support circuitry. Fig. 2 shows how an example two-layer network can be mapped to the AER framework. Each row in the table corresponds to a single synaptic connection: it contains information about the sender location, the receiver location, the connection polarity (excitatory or inhibitory), and the connection magnitude. When a spike is sent to the system, the sender address is used as an index into the LUT and a signal activates the event generator (EG) circuit. The EG scrolls through all the table entries corresponding to synaptic connections from the sending neuron. For each synapse, the receiver address and the spike polarity are sent to the IFAT, and the EG initiates as many spikes as are specified in the weight magnitude field.
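As an illustration, the look-up table of Fig. 2(b) can be modeled as a set of synapse records keyed by sender address. The Python sketch below (hypothetical field layout and made-up weight values) routes a single incoming address-event through such a table, emitting as many receiver events as the weight magnitude field specifies:

    # Hypothetical look-up table, one entry per synaptic connection:
    # sender address -> list of (receiver address, polarity, magnitude).
    LUT = {
        0: [(1, +1, 3), (2, -1, 1)],  # sender 0 excites cell 1, inhibits cell 2
        2: [(1, +1, 8), (2, +1, 4)],  # sender 2 excites cells 1 and 2
    }

    def event_generator(sender, send_to_ifat):
        # Scroll through all table entries for the sending neuron; for each
        # synapse, emit as many events as the weight magnitude specifies.
        for receiver, polarity, magnitude in LUT.get(sender, []):
            for _ in range(magnitude):
                send_to_ifat(receiver, polarity)

    # Route one spike from sender 0 through the table.
    event_generator(0, lambda r, p: print(f"event -> cell {r}, polarity {p:+d}"))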
Events received by the IFAT are temporally and spatially integrated by analog circuitry. Each integrate-and-fire cell receives excitatory and inhibitory inputs that increment or decrement the potential stored on an internal capacitance. When this potential exceeds a given threshold, the cell generates an output event and broadcasts its address to the AE arbiter. The physical location of neurons in the array is inconsequential, as connections are routed through the LUT, which is implemented in random-access memory (RAM) outside of the chip.

An interesting feature of the IFAT is that it is insensitive to the timescale over which events occur. Because internal potentials are not subject to decay, the cells’ activities are sensitive only to the order of the events. The effects of leakage current in real neurons are emulated by regularly sending inhibitory events to all of the cells in the array. Modulating the timing of these “global decay events” allows us to dynamically warp the time axis.
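The cell behavior just described can be summarized in a few lines; the following sketch is an idealized software model under assumed threshold and step values, not a description of the chip’s analog circuitry:

    # Idealized model of an IFAT cell (threshold and step size are assumed;
    # on the chip, the potential is stored on an internal capacitance).
    class IFATCell:
        def __init__(self, threshold=15, step=1):
            self.v = 0  # stored potential; note there is no intrinsic decay
            self.threshold = threshold
            self.step = step  # size of each increment/decrement

        def receive(self, polarity):
            # Integrate one excitatory (+1) or inhibitory (-1) address-event.
            self.v += polarity * self.step
            if self.v >= self.threshold:
                self.v = 0
                return True  # fire: broadcast this cell's address
            return False

    def global_decay_event(cells):
        # Leakage is emulated by broadcasting an inhibitory event to every
        # cell; modulating how often this happens warps the time axis.
        for cell in cells:
            cell.receive(-1)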
We have designed and implemented a prototype system that uses the IFAT infrastructure to implement massively connected, reconfigurable neural networks. An example setup is described in detail in [17] and is illustrated in Fig. 3. It consists of a custom VLSI IFAT chip with a 1024-neuron array, a RAM that stores the look-up table, and a microcontroller unit (MCU) that realizes the event generator.

Figure 3: Hardware implementation of enhanced AER. The elements are an integrate-and-fire array transceiver (IFAT) chip, a random-access memory (RAM) look-up table, and a microcontroller unit (MCU). (a) Feedforward mode. Input events are routed by the RAM look-up table and integrated by the IFAT chip. (b) Recurrent mode. Events emitted by the IFAT are sent to the look-up table, where they are routed back to the IFAT. This makes virtual connections between IFAT cells.

As discussed in [18, p. 91], a synaptic weight w can be expressed as the combined effect of three physical mechanisms:

w = npq   (1)

where n is the number of quantal neurotransmitter sites, p is the probability of synaptic release per site, and q is a measure of the postsynaptic effect of the synapse. Many early neural network models held n and p constant and attributed all of the variability in the weight to q. Our architecture is capable of varying all three components: n by sending multiple events to the same receiver location, p by probabilistically routing the events (as in [17]), and q by varying the size of the potential increments and decrements in the IFAT cells. In the experiments described in this paper, the transmission of address-events is deterministic, and the weight is controlled by varying the number of events per synapse, corresponding to a variation in n.

3 Address-domain learning

The AER architecture lends itself to implementations of synaptic plasticity, since information about presynaptic and postsynaptic activity is readily available and the contents of the synaptic weight fields in RAM are easily modifiable “on the fly.” As in biological systems, synapses can be dynamically created and pruned by inserting or deleting entries in the LUT.

As with address-domain connectivity, the advantage of address-domain plasticity is that the constituents of the implemented learning rule are not constrained to be local in space or time. Various forms of learning algorithms can be mapped onto the same architecture by reconfiguring the MCU interfacing the IFAT and the LUT.

Basic forms of Hebbian learning can be implemented with no overhead in the address domain. When a presynaptic event, routed by the LUT through the IFAT, elicits a postsynaptic event, the synaptic strength between the two neurons is simply updated by incrementing the data field of the LUT entry at the active address location. A similar strategy can be adopted for other learning rules of the incremental outer-product type, such as delta-rule or backpropagation supervised learning.

Non-local learning rules require control of the LUT address space to implement spatial and/or temporal dependencies. Most interesting from a biological perspective are forms of spike timing-dependent plasticity (STDP).

Figure 4: Spike timing-dependent plasticity (STDP) in the address domain. (a) Synaptic updates Δw as a function of the relative timing of presynaptic and postsynaptic events, with asymmetric windows of anti-causal and causal regimes, τ− > τ+. (b) Address-domain implementation using presynaptic (top) and postsynaptic (bottom) event queues of window lengths τ+ and τ−.

4 Spike timing-dependent plasticity

Learning rules based on STDP specify changes in synaptic strength depending on the time interval between each pair of presynaptic and postsynaptic events. “Causal” postsynaptic events that succeed presynaptic action potentials (APs) by a short duration of time potentiate the synaptic strength, while “anti-causal” presynaptic events succeeding postsynaptic APs by a short duration depress the synaptic strength. The amount of strengthening or weakening depends on the exact time of the event within the causal or anti-causal regime, as illustrated in Fig. 4(a). The weight update has the form

Δw = −η[τ− − (tpre − tpost)]   if 0 ≤ tpre − tpost ≤ τ−
Δw = η[τ+ + (tpre − tpost)]    if −τ+ ≤ tpre − tpost ≤ 0        (2)
Δw = 0                         otherwise

where tpre and tpost denote the time stamps of presynaptic and postsynaptic events.
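A direct transcription of (2) into Python reads as follows; η is a free gain parameter (not specified above), and the default window widths are the values used in Section 5:

    def delta_w(t_pre, t_post, eta=1.0, tau_plus=3, tau_minus=6):
        # Weight update of Eq. (2): causal pairs (post after pre) potentiate,
        # anti-causal pairs (pre after post) depress, and pairs falling
        # outside both windows leave the weight unchanged.
        dt = t_pre - t_post
        if 0 <= dt <= tau_minus:      # anti-causal regime
            return -eta * (tau_minus - dt)
        if -tau_plus <= dt <= 0:      # causal regime
            return eta * (tau_plus + dt)
        return 0.0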
For stable learning, the time windows of the causal and anti-causal regimes, τ+ and τ−, are subject to the constraint τ+ < τ−. For more general functional forms of STDP Δw(tpre − tpost), the area under the synaptic modification curve in the anti-causal regime must be greater than that in the causal regime to ensure convergence of the synaptic strengths [7].

The STDP synaptic modification rule (2) is implemented in the address domain by augmenting the AER architecture with two event queues, one each for presynaptic and postsynaptic events, as shown in Fig. 4(b). Each time a presynaptic event is generated, the sender’s address is entered into a queue with an associated value of τ+. All values in the queue are decremented every time a global decay event is observed, marking one unit of time T. A postsynaptic event triggers a sequence of synaptic updates by iterating backwards through the queue to find the causal spikes, in turn locating the synaptic strength entries in the LUT corresponding to the sender addresses and synapse index, and increasing the synaptic strengths in the LUT according to the values stored in the queue. Anti-causal events require an equivalent set of operations, matching each incoming presynaptic spike with a second queue of postsynaptic events. In this case, entries in the queue are initialized with a value of τ− and decremented after every interval of time T between decay events, corresponding to the decrease in strength to be applied to the presynaptic/postsynaptic pair.

We have chosen a particularly simple form of the synaptic modification function (2) as proof of principle in the experiments. More general functions can be implemented by a table that maps time bins in the history of the queue to specified values of Δw(nT), with positive values of n indexing the postsynaptic queue and negative values indexing the presynaptic queue.
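As a software model of this mechanism (assumed data structures; in the hardware, the MCU performs the equivalent operations on the weight fields in the RAM look-up table), the two queues can be sketched as follows:

    from collections import deque

    # Each queue entry pairs a cell address with a countdown initialized to
    # the window length and decremented on every global decay event; the
    # countdown is also the magnitude of the update, as in Eq. (2) with eta=1.
    TAU_PLUS, TAU_MINUS = 3, 6
    pre_queue, post_queue = deque(), deque()
    weights = {}  # (pre_addr, post_addr) -> strength, mirroring the LUT

    def on_global_decay():
        for q in (pre_queue, post_queue):
            for entry in list(q):
                entry[1] -= 1
                if entry[1] <= 0:
                    q.remove(entry)  # event has left the learning window

    def on_pre_event(pre_addr):
        # Anti-causal: depress synapses onto recently active postsynaptic cells.
        for post_addr, remaining in post_queue:
            key = (pre_addr, post_addr)
            weights[key] = weights.get(key, 0) - remaining
        pre_queue.append([pre_addr, TAU_PLUS])

    def on_post_event(post_addr):
        # Causal: potentiate synapses from recently active presynaptic cells.
        for pre_addr, remaining in pre_queue:
            key = (pre_addr, post_addr)
            weights[key] = weights.get(key, 0) + remaining
        post_queue.append([post_addr, TAU_MINUS])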
Figure 5: Pictorial representation of our experimental neural network, with actual spike train data sent from the workstation to the first layer. All cells are identical, but x18...x20 (shaded) receive correlated inputs. Activity becomes more sparse in the hidden and output layers as the IFAT integrates spatiotemporally. Note that connections are virtual, specified in the RAM look-up table.

5 Experimental results

We have implemented a Hebbian spike timing-based learning rule on a network of 21 neurons using the IFAT system (Fig. 5). Each of the 20 neurons in the input layer is driven by an externally supplied, randomly generated list of events. Sufficiently high levels of input cause these neurons to produce spikes that subsequently drive the output layer. All events are communicated over the address-event bus and are monitored by a workstation communicating with the MCU and RAM. As shown in [7], temporally asymmetric Hebbian learning using STDP is useful for detecting correlations between inputs. We have demonstrated that this can be accomplished in hardware in the address domain by presenting the network with stimulus patterns containing a set of correlated inputs and a set of uncorrelated inputs: neurons x1...x17 are all stimulated independently with a probability of 0.05 per unit of time, while neurons x18...x20 have the same likelihood of stimulation but are always activated together. Thus, over a sufficiently long period of time each neuron in the input layer will receive the same amount of activation, but the correlated group will fire synchronous spikes more frequently than any other combination of neurons.

In the implemented learning rule (2), causal activity results in synaptic strengthening and anti-causal activity results in synaptic weakening. As described in Section 4, for an anti-causal regime τ− larger than the causal regime τ+, random activity results in overall weakening of a synapse. All synapses connecting the input and output layers are equally likely to be active during an anti-causal regime. However, the increased average contribution to the postsynaptic membrane potential from the correlated group of neurons renders this population slightly more likely to be active during the causal regime than any single member of the uncorrelated group. Therefore, the synaptic strengths for this group of neurons will increase with respect to the uncorrelated group, further augmenting their likelihood of causing a postsynaptic spike. Over time, this positive feedback results in a random but stable distribution of synaptic strengths in which the correlated neurons’ synapses form the strongest connections and the remaining neurons are distributed around an equilibrium value for weak connections.

In the experiments, we have chosen τ+ = 3 and τ− = 6. An example of a typical distribution of synaptic strengths recorded after 200,000 events have been processed by the input layer is shown in Fig. 6(a). For the data shown, synapses driving the input layer were fixed at the maximum strength (+31), the rate of decay was −4 per unit of time, and the plastic synapses between the input and output layers were all initialized to +8. Because the events sent from the workstation to the input layer are randomly generated, fluctuations in the strengths of individual synapses occur consistently throughout the operation of the system. Thus, the final distribution of synaptic weights is different each time, but a pattern can be clearly discerned from the average value of the synaptic weights over 20 separate trials of 200,000 events each, as shown in Fig. 6(b).

Figure 6: Experimental synaptic strengths in the second layer, recorded from the IFAT system after the presentation of 200,000 input events. (a) Typical experimental run. (b) Average (+SE) over 20 experimental runs.
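For reference, the stimulus protocol described at the start of this section can be reproduced with a few lines of Python (a sketch of the workstation-side event generation; the parameter names are ours):

    import random

    def input_events(n_steps, p=0.05, n_uncorrelated=17, n_correlated=3):
        # Per unit of time, each of x1..x17 is stimulated independently with
        # probability p; x18..x20 share a single draw, so they are always
        # activated together while receiving the same mean drive per cell.
        for t in range(n_steps):
            active = [i for i in range(n_uncorrelated) if random.random() < p]
            if random.random() < p:
                active += list(range(n_uncorrelated, n_uncorrelated + n_correlated))
            yield t, active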
The system is robust to changes in various parameters of the spike timing-based learning algorithm, as well as to modifications in the number of correlated, uncorrelated, and total neurons (data not shown). It also converges to a similar distribution regardless of the initial values of the synaptic strengths (with the constraint that the net activity must be larger than the rate of decay of the voltage stored on the membrane capacitance of the output neuron).

6 Conclusion

We have demonstrated that the address domain provides an efficient representation in which to implement synaptic plasticity that depends on the relative timing of events. Unlike dedicated hardware implementations of learning functions embedded in the connectivity, the address-domain implementation allows for learning rules with interactions that are not constrained in space and time. Experimental results verified this for temporally antisymmetric Hebbian learning, but the framework can be extended to general learning rules, including reward-based schemes [10].

The IFAT architecture can be augmented to include sensory input, physical nearest-neighbor connectivity between neurons, and more realistic biological models of neural computation. Additionally, integrating the RAM and IFAT onto a single chip will allow for increased computational bandwidth. Unlike a purely digital implementation or software emulation, the AER framework preserves the continuous nature of the timing of events.

References

[1] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.

[2] S. R. Deiss, R. J. Douglas, and A. M. Whatley, “A pulse-coded communications infrastructure for neuromorphic systems,” in Pulsed Neural Networks (W. Maas and C. M. Bishop, eds.), pp. 157–178, Cambridge, MA: MIT Press, 1999.

[3] K. Boahen, “A retinomorphic chip with parallel pathways: Encoding INCREASING, ON, DECREASING, and OFF visual signals,” Analog Integrated Circuits and Signal Processing, vol. 30, pp. 121–135, February 2002.

[4] G. Cauwenberghs and M. A. Bayoumi, eds., Learning on Silicon: Adaptive VLSI Neural Systems. Norwell, MA: Kluwer Academic, 1999.

[5] M. A. Jabri, R. J. Coggins, and B. G. Flower, Adaptive Analog VLSI Neural Systems. London: Chapman & Hall, 1996.

[6] T. J. Sejnowski, “Storing covariance with nonlinearly interacting neurons,” Journal of Mathematical Biology, vol. 4, pp. 303–321, 1977.

[7] S. Song, K. D. Miller, and L. F. Abbott, “Competitive Hebbian learning through spike-timing-dependent synaptic plasticity,” Nature Neuroscience, vol. 3, no. 9, pp. 919–926, 2000.

[8] M. C. W. van Rossum, G. Q. Bi, and G. G. Turrigiano, “Stable Hebbian learning from spike timing-dependent plasticity,” Journal of Neuroscience, vol. 20, no. 23, pp. 8812–8821, 2000.

[9] P. Häfliger and M. Mahowald, “Spike based normalizing Hebbian learning in an analog VLSI artificial neuron,” in Learning on Silicon (G. Cauwenberghs and M. A. Bayoumi, eds.), pp. 131–142, Norwell, MA: Kluwer Academic, 1999.
[10] T. Lehmann and R. Woodburn, “Biologically-inspired on-chip learning in pulsed neural networks,” Analog Integrated Circuits and Signal Processing, vol. 18, no. 2–3, pp. 117–131, 1999.

[11] A. Bofill, A. F. Murray, and D. P. Thompson, “Circuits for VLSI implementation of temporally-asymmetric Hebbian learning,” in Advances in Neural Information Processing Systems 14 (T. Dietterich, S. Becker, and Z. Ghahramani, eds.), Cambridge, MA: MIT Press, 2002.

[12] M. Mahowald, An Analog VLSI System for Stereoscopic Vision. Boston: Kluwer Academic Publishers, 1994.

[13] J. Lazzaro, J. Wawrzynek, M. Mahowald, M. Sivilotti, and D. Gillespie, “Silicon auditory processors as computer peripherals,” IEEE Trans. Neural Networks, vol. 4, no. 3, pp. 523–528, 1993.

[14] K. A. Boahen, “Point-to-point connectivity between neuromorphic chips using address events,” IEEE Trans. Circuits and Systems II: Analog and Digital Signal Processing, vol. 47, no. 5, pp. 416–434, 2000.

[15] C. M. Higgins and C. Koch, “Multi-chip neuromorphic motion processing,” in Proceedings of the 20th Anniversary Conference on Advanced Research in VLSI (D. Wills and S. DeWeerth, eds.), pp. 309–323, Los Alamitos, CA: IEEE Computer Society, 1999.

[16] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, “Orientation-selective aVLSI spiking neurons,” in Advances in Neural Information Processing Systems 14 (T. Dietterich, S. Becker, and Z. Ghahramani, eds.), Cambridge, MA: MIT Press, 2002.

[17] D. H. Goldberg, G. Cauwenberghs, and A. G. Andreou, “Probabilistic synaptic weighting in a reconfigurable network of VLSI integrate-and-fire neurons,” Neural Networks, vol. 14, no. 6–7, pp. 781–793, 2001.

[18] C. Koch, Biophysics of Computation: Information Processing in Single Neurons. New York: Oxford University Press, 1999.
", "award": [], "sourceid": 2190, "authors": [{"given_name": "R.", "family_name": "Vogelstein", "institution": null}, {"given_name": "Francesco", "family_name": "Tenore", "institution": null}, {"given_name": "Ralf", "family_name": "Philipp", "institution": null}, {"given_name": "Miriam", "family_name": "Adlerstein", "institution": null}, {"given_name": "David", "family_name": "Goldberg", "institution": null}, {"given_name": "Gert", "family_name": "Cauwenberghs", "institution": null}]}