{"title": "A VLSI Implementation of the Adaptive Exponential Integrate-and-Fire Neuron Model", "book": "Advances in Neural Information Processing Systems", "page_first": 1642, "page_last": 1650, "abstract": "We describe an accelerated hardware neuron being capable of emulating the adap-tive exponential integrate-and-fire neuron model. Firing patterns of the membrane stimulated by a step current are analyzed in transistor level simulation and in silicon on a prototype chip. The neuron is destined to be the hardware neuron of a highly integrated wafer-scale system reaching out for new computational paradigms and opening new experimentation possibilities. As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lays on parameterizability and reproduction of the analytical model.", "full_text": "A VLSI Implementation of the Adaptive Exponential\n\nIntegrate-and-Fire Neuron Model\n\nSebastian Millner, Andreas Gr\u00a8ubl, Karlheinz Meier,\n\nJohannes Schemmel and Marc-Olivier Schwartz\n\nKirchhoff-Institut f\u00a8ur Physik\n\nRuprecht-Karls-Universit\u00a8at Heidelberg\n\nsmillner@kip.uni-heidelberg.de\n\nAbstract\n\nWe describe an accelerated hardware neuron being capable of emulating the adap-\ntive exponential integrate-and-\ufb01re neuron model. Firing patterns of the membrane\nstimulated by a step current are analyzed in transistor level simulations and in\nsilicon on a prototype chip. The neuron is destined to be the hardware neuron\nof a highly integrated wafer-scale system reaching out for new computational\nparadigms and opening new experimentation possibilities. 
As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lies on parameterizability and reproduction of the analytical model.\n\n1 Introduction\n\nSince the beginning of neuromorphic engineering [1, 2], designers have had great success in building VLSI1 neurons mimicking the behavior of biological neurons using analog circuits [3–8]. The design approaches are quite different though, as the desired functions constrain the design.\nIt has been argued [4] whether it is best to emulate an established model or to create a new one using analog circuits. The second path is taken by [3–7], for instance, aiming at the low power consumption and fault tolerance of neural computation, to be used in a computational device in robotics, for example. This can be done most effectively by the technology-driven design of a new model, fitted directly to biological results. Our approach is to gain access to the computational power of neural systems by creating a device able to emulate biologically relevant spiking neural networks whose behavior can be reproduced in a traditional simulation environment for modeling. The use of a commonly known model enables modelers to run experiments on neuromorphic hardware and compare them to simulations. This design methodology has been applied successfully in [8, 9], implementing the conductance-based integrate-and-fire model [10]. The software framework PyNN [11, 12] even allows for directly switching between a simulator and the neuromorphic hardware device, allowing modelers to access the hardware on a high level without knowing all implementation details.\nThe hardware neuron presented here can emulate the adaptive exponential integrate-and-fire neuron model (AdEx) [13], developed within the FACETS project [14]. 
The AdEx model can produce complex firing patterns observed in biology [15], like spike-frequency adaptation, bursting, regular spiking, irregular spiking and transient spiking, by tuning a limited number of parameters [16].\n\n1Very large scale integration\n\n\fCompleted by the reset conditions, the model can be described by the following two differential equations for the membrane voltage V and the adaptation variable w:\n\n−Cm dV/dt = gl(V − El) − gl ∆t e^((V − Vt)/∆t) + ge(t)(V − Ee) + gi(t)(V − Ei) + w;   (1)\n−τw dw/dt = w − a(V − El).   (2)\n\nCm, gl, ge and gi are the membrane capacitance, the leakage conductance and the conductances for excitatory and inhibitory synaptic inputs, where ge and gi depend on time and the inputs from other neurons. El, Ei and Ee are the leakage reversal potential and the synaptic reversal potentials. The parameters Vt and ∆t are the effective threshold potential and the threshold slope factor. The time constant of the adaptation variable is τw, and a is called the adaptation parameter; it has the dimension of a conductance.\nIf the membrane voltage crosses a certain threshold voltage Θ, the neuron is reset:\n\nV → Vreset;   (3)\nw → w + b.   (4)\n\nThe parameter b is responsible for spike-triggered adaptation. Due to the sharp rise created by the exponential term in equation 1, the exact value of Θ is not critical for the determination of the moment of a spike [13].\n\nFigure 1: Phase plane of the AdEx model with parameters according to figure 4 d) from [16], stimulus excluded. V and w will be rising below their nullclines and falling above.\n\nFigure 1 shows the phase plane of the AdEx model with its nullclines. The nullcline of a variable is the curve where its time derivative is zero. The crossing of the nullclines on the left is the stable fixed point, where the trajectory rests. 
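For reference, the dynamics of equations 1–4 can be reproduced with a few lines of forward-Euler integration. The sketch below describes the analytical model, not the circuit; the default parameter values are the cortical-neuron fit from [13], the synaptic conductances are omitted, and the 1 nA step stimulus is purely illustrative:

```python
import numpy as np

def simulate_adex(stim, dt=1e-5, t_end=0.5,
                  Cm=281e-12, gl=30e-9, El=-70.6e-3, Vt=-50.4e-3,
                  Dt=2e-3, tau_w=144e-3, a=4e-9, b=80.5e-12,
                  Vreset=-70.6e-3, theta=-30e-3):
    """Forward-Euler integration of equations 1-4 (current stimulus only)."""
    n = int(t_end / dt)
    V, w = El, 0.0
    trace = np.empty(n)
    spikes = []
    for i in range(n):
        # eq. 1 rearranged: Cm dV/dt = -gl(V-El) + gl*Dt*exp((V-Vt)/Dt) - w + I
        dV = (-gl * (V - El) + gl * Dt * np.exp((V - Vt) / Dt)
              - w + stim(i * dt)) / Cm
        # eq. 2 rearranged: tau_w dw/dt = a(V-El) - w
        dw = (a * (V - El) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V > theta:        # spike detected at threshold Theta
            V = Vreset       # eq. 3
            w += b           # eq. 4, spike-triggered adaptation
            spikes.append(i * dt)
        trace[i] = V
    return trace, spikes

# step current of 1 nA switched on after 50 ms
trace, spikes = simulate_adex(lambda t: 1e-9 if t > 0.05 else 0.0)
```

With these values the neuron fires tonically with spike-frequency adaptation; the reset to Vreset after each threshold crossing keeps the Euler step from diverging in the exponential term.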
A constant current stimulus will lift the V-nullcline. For V > Vt below the V-nullcline, the derivative of V increases with V: the exponential dominates and V diverges until Θ is reached.\nThe neuron is integrated on a prototype chip called HICANN2 [17–19] (figure 2), which was produced in 2009. Each HICANN contains 512 dendrite membrane (DenMem) circuits (figure 3), each being connected to 224 dynamic input synapses. Neurons are built of DenMems by shorting their membrane capacitances, yielding up to 14336 input synapses for a single neuron. The HICANN is prepared for integration in the FACETS wafer-scale system [17–19], allowing 384 HICANNs to be interconnected on an uncut silicon wafer via a high-speed bus system, so networks of up to 196608 neurons can be emulated on a single wafer.\nA major feature of the described hardware neuron is that the size of its components allows working with an acceleration factor of 10^3 up to 10^5 compared to biological real time, enabling the operator to do several runs of an experiment in a short time, to perform large parameter sweeps, and to gain better statistics. Effects occurring on a longer timescale, like long-term synaptic plasticity, could be emulated. This way the wafer-scale system can emerge as an alternative and an enhancement to traditional computer simulations in neuroscience. Another VLSI neuron designed with a time scaling factor is presented in [7]. That implementation is capable of reproducing many different firing patterns of cortical neurons, but has no direct correspondence to a neuron model used in the modeling community.\n\n2High Input Count Analog Neural Network\n\n\fFigure 2: Photograph of the HICANN-chip\n\n2 Neuron implementation\n\n2.1 Neuron\n\nThe smallest part of a neuron is a DenMem, which implements the terms of the AdEx neuron described above. 
Each term is constructed by a single circuit using operational amplifiers (OP) and operational transconductance amplifiers (OTA) and can be switched off separately, so less complex models, like the leaky integrate-and-fire model implemented in [9], can be emulated. OTAs directly model conductances for small input differences; the conductance is proportional to a biasing current. A first, not yet completely implemented version of the neuron was proposed in [17]. Some simulation results of the actual neuron can be found in [19].\n\nFigure 3: Schematic diagram of the AdEx neuron circuit\n\nFigure 3 shows a block diagram of a DenMem. During normal operation, the neuron receives rectangular current pulses as input from the synapse array (figure 2) at one of the two synaptic input circuits. Inside these circuits the current is integrated by a leaky-integrator OP circuit, resulting in a voltage that is transformed into a current by an OTA. Using this current as bias for another OTA, a sharply rising and exponentially decaying synaptic input conductance is created. 
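The behavior of this input chain can be sketched numerically. The sketch below is a behavioral abstraction of the circuit (pulse, leaky integrator, conductance proportional to the integrator state); the time constants and scale factors are illustrative placeholders, not measured chip values:

```python
import numpy as np

def synaptic_conductance(pulse_times, dt=1e-9, t_end=2e-6,
                         tau=100e-9, pulse_len=4e-9, g_scale=1e-6):
    """Behavioral sketch of one synaptic input circuit: a rectangular
    current pulse from the synapse array is integrated by a leaky
    integrator; its output is assumed proportional to the bias current
    of the output OTA and hence to the synaptic conductance g(t)."""
    n = int(t_end / dt)
    x = 0.0                               # integrator state (a.u.)
    g = np.empty(n)
    for i in range(n):
        t = i * dt
        drive = 1.0 if any(ts <= t < ts + pulse_len for ts in pulse_times) else 0.0
        x += dt * (drive - x / tau)       # sharp rise during the pulse,
        g[i] = g_scale * x                # exponential decay afterwards
    return g

g = synaptic_conductance([0.2e-6])        # one input event at t = 0.2 us
```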
Each DenMem is equipped with two synaptic input circuits, each having its own connection to the synapse array. The output of a synapse can be chosen between them, which allows for two independent synaptic channels which could be inhibitory or excitatory.\nThe leakage term of equation 1 can be implemented directly using an OTA, building a conductance between the leakage potential El and the membrane voltage V.\nReplacing the adaptation variable w in equation 2 by a(Vadapt − El) results in:\n\n−τw dVadapt/dt = Vadapt − V.   (5)\n\nNow the time constant τw shall be created by a capacitance Cadapt and a conductance gadapt, and we get:\n\n−Cadapt dVadapt/dt = gadapt(Vadapt − V).   (6)\n\nWe need to transform b into a voltage using the conductance a and get\n\nIb tpulse / Cadapt = b/a,   (7)\n\nwhere the fixed tpulse is the time a current Ib increases Vadapt on Cadapt at each detected spike of the neuron. These resulting equations for adaptation can be directly implemented as a circuit.\nA MOSFET3 connected as a diode is used to emulate the exponential positive feedback of equation 1 (figure 4). To generate the correct gate-source voltage, a non-inverting amplifier multiplies the difference between the membrane voltage and a voltage Vt by an adjustable factor. A simplified version of the circuit can be seen in figure 4. The gate-source voltage of M1 is:\n\nVGS,M1 = (R1/R2)(V − Vt).   (8)\n\nInserted into the equation for a MOSFET in sub-threshold mode, this results in a current depending exponentially on V following equation 1, where ∆t can be adjusted via the resistors R1 and R2. 
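A small numerical sketch shows how the resistor ratio tunes the slope factor: inserting the gate-source voltage of equation 8 into the standard sub-threshold law I = I0·exp(VGS/(n·UT)) gives an effective slope factor of n·UT·R2/R1. The device constants I0, n and UT below are generic textbook values, assumed for illustration rather than extracted from the chip:

```python
import math

# Illustrative device constants (assumed, not chip data):
I0 = 1e-12    # sub-threshold current scale [A]
n_sub = 1.5   # sub-threshold slope factor
UT = 0.026    # thermal voltage at room temperature [V]

def exp_current(V, Vt, r):
    """Drain current of M1 for a gain r = R1/R2: eq. 8 gives
    VGS = r*(V - Vt), inserted into I = I0*exp(VGS/(n*UT))."""
    return I0 * math.exp(r * (V - Vt) / (n_sub * UT))

def delta_t(r):
    """Effective threshold slope factor of the emulated exponential."""
    return n_sub * UT / r

# a gain of R1/R2 = 4 yields delta_t of roughly 10 mV,
# consistent with the target range listed in Table 1
```

By this relation, the current grows by a factor e whenever V rises by one slope factor, exactly as the exponential term of equation 1 requires.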
The factor gl∆t in front of the exponential and the Vt of the model can be changed by moving the circuit's Vt. To realize large (hundreds of kΩ) variable resistors, the slope of the output characteristic of a MOSFET biased in saturation is used as a replacement for R1.\n\nFigure 4: Simplified schematic of the exponential circuit\n\nOur neuron detects a spike at a directly adjustable threshold voltage Θ; this is especially necessary as the circuit can not only implement the AdEx model, but also less complex models. In a model without a sharp spike, like the one created by the positive feedback of the exponential term, spike timing very much depends on the exact voltage Θ.\nA detected spike triggers a reset of the membrane by a current pulse to a potential Vreset for an adjustable time. Our circuit therefore supports basic modeling of a refractory period in addition to the modeling by the adaptation variable.\n\n2.2 Parameterization\n\nIn contrast to most other systems, we are using analog floating-gate memories similar to [20] as storage devices for the analog parameters of a neuron. Due to the small size of these cells, we are capable of providing most parameters individually for a single DenMem circuit. This way, matching issues can be counterbalanced, and different types of neurons can be implemented on a single chip, enhancing the universality of the wafer-scale system.\nTable 1 shows the parameters used in the implemented AdEx model and the parameter ranges aimed at during design. Technical biasing parameters and parameters of the synaptic input circuits are excluded. Parameter ranges of several orders of magnitude are necessary, as our neurons can work at different time scalings relative to real time. This is achieved by switching between different multiplication factors for biasing currents. 
As these switches are parameterized globally, the ranges of a parameter within a neuron group (one quarter of a HICANN) need to be in the same order of magnitude.\n\n3metal-oxide-semiconductor field-effect transistor\n\n\fTable 1: Neuron parameters\n\nPARAMETER  SHARING     RANGE\ngl         individual  34 nS..4 µS\na          individual  34 nS..4 µS\ngadapt     individual  5 nS..2 µS\nIb         individual  200 nA..5 µA\ntpulse     fixed       18 ns\nVreset     global      0 V..1.8 V\nVexp       individual  0 V..1.8 V\ntreset     global      25 ns..500 ns\nCmem       global      400 fF or 2 pF\nCadapt     fixed       2 pF\n∆t         individual  ..10 mV..\nΘ          individual  0 V..1.8 V\n\nAs a starting point for the parameter ranges, [13] and [21] have been used. The chosen ranges allow leakage time constants τmem = Cmem/gl, at an acceleration factor of 10^4, between 1 ms and 588 ms, and an adaptation time constant τw between 10 ms and 5 s in terms of biological real time. So the parameters used in [22], for instance, are easily reached. When switching to other acceleration modes, the regime for biologically realistic operation is reduced, as the needed time constants are shifted by one order of magnitude.\nAs OTAs are used for modeling conductances, and linear operation of this type of device can only be achieved for small voltage differences, it is necessary to limit the operating range of the variables V and Vadapt to some hundreds of millivolts. If this range is left, the OTAs will no longer work as conductances but as constant currents; hence there will, for example, be no more spike-triggered adaptation.\nA neuron can be composed of up to 64 DenMem circuits; hence several different adaptation variables, each with its own time constant, are possible.\n\n2.3 Parameter mapping\n\nFor a given set of parameters from the AdEx model, we want to reproduce the exact same behavior with our hardware neuron. 
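The biological time-constant ranges quoted for Table 1 follow directly from the hardware values; a quick consistency check at an acceleration factor of 10^4:

```python
# Biological time constants reachable with the Table 1 hardware ranges.
accel = 1e4

def bio_tau(C, g):
    """Hardware time constant C/g expressed in biological real time."""
    return accel * C / g

tau_mem_min = bio_tau(400e-15, 4e-6)   # smallest Cmem, largest gl
tau_mem_max = bio_tau(2e-12, 34e-9)    # largest Cmem, smallest gl
tau_w_min = bio_tau(2e-12, 2e-6)       # Cadapt is fixed at 2 pF
tau_w_max = bio_tau(2e-12, 5e-9)
print(tau_mem_min, tau_mem_max)        # ~1 ms and ~588 ms
print(tau_w_min, tau_w_max)            # ~10 ms and ~4 s; the exact upper
                                       # bound depends on the realized gadapt
```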
Therefore, a simple two-step procedure was developed to translate biological parameters from the AdEx model into hardware parameters. The translation procedure is summarized in figure 5.\n\nFigure 5: Biology-to-hardware parameter translation\n\nThe first step is to scale the biological AdEx parameters in terms of time and voltage. At this stage, the desired time acceleration factor is chosen and applied to the two time constants of the model. Then a voltage scaling factor is defined, by which the biological voltage parameters are multiplied. This factor has to be high enough to improve the signal-to-noise ratio in the hardware, but not too high, to stay within the operating range of the OTAs.\nThe second step is to translate the parameters from the scaled AdEx model to hardware parameters. For this purpose, each part of the DenMem circuit was characterized in transistor-level simulations using a circuit simulator. This theoretical characterization was then used to establish mathematical relations between scaled AdEx parameters and hardware parameters.\n\n2.4 Measurement capabilities\n\nFor neuron measuring purposes, the membrane can either be stimulated by incoming events from the synapse array (as an additional feature, a Poisson event source is implemented on the chip) or by a programmable current. This current can be programmed up to a few µA, replaying 129 10-bit values using a sequencer and a digital-to-analog converter. Four current sources are implemented on the chip, allowing adjacent neurons to be stimulated individually. 
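The first (scaling) step of the parameter translation described in section 2.3 can be sketched as follows; the acceleration factor of 10^4, the voltage gain of 5 and the 1.15 V offset are illustrative choices, not the values used by the actual mapping software:

```python
def scale_adex(bio, accel=1e4, v_scale=5.0, v_offset=1.15):
    """Scaling step: compress the time constants by the acceleration
    factor and map the voltage parameters into the hardware operating
    range. The gain and offset are picked so that a biological resting
    level of -70 mV lands near 0.8 V (assumed, for illustration)."""
    s = dict(bio)
    for key in ("tau_mem", "tau_w"):          # times shrink by 10^4
        s[key] = bio[key] / accel
    for key in ("El", "Vt", "Vreset"):        # voltages: scale and shift
        s[key] = bio[key] * v_scale + v_offset
    s["delta_t"] = bio["delta_t"] * v_scale   # slope factor: scale only
    return s

bio = {"tau_mem": 10e-3, "tau_w": 144e-3, "El": -70e-3,
       "Vt": -50e-3, "Vreset": -70e-3, "delta_t": 2e-3}
hw = scale_adex(bio)   # El maps to 0.8 V, delta_t to 10 mV
```

The slope factor ∆t is a voltage difference, so it is scaled but not shifted; the second (translation) step would then convert these scaled values into bias currents and floating-gate voltages.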
Currently, the maximum period of a current stimulus is limited to 33 µs, but this can easily be extended, as the HICANN host interface allows an update of the value storage in real time.\nThe membrane voltage and all parameters stored in the floating gates can be measured directly via one of the two analog outputs of the HICANN chip. The membrane voltages of two arbitrary neurons can be read out at the same time.\nTo characterize the chip, parameters like the membrane capacitance need to be measured indirectly, using the OTA emulating gl as a current source, for example.\n\n3 Results\n\nDifferent firing patterns have been reproduced with our hardware neuron and the current stimulus, in circuit simulation and in silicon, by injecting a periodic step current onto the membrane. The examined neuron consists of two DenMem circuits with their membrane capacitances switched to 2 pF each. Figure 6 shows some reproduced patterns according to [23] or [16], each next to its phase plane trajectory of V and Vadapt. As the simulation describes an electronic circuit, the trajectories are continuous. All graphs have been recorded injecting a step current of 600 nA onto the membrane. gadapt and gl have been chosen equal in all simulations except tonic spiking, to simplify the nullclines:\n\nVadapt = −(gl/a)(V − El) + (gl/a) ∆T e^((V − VT)/∆T) + El + I/a;   (9)\nVadapt = V.   (10)\n\nAs described in [16], the AdEx model allows different types of spike afterpotentials (SAP). Sharp SAPs are reached if the reset after a spike sets the trajectory to a point below the V-nullcline. 
If the reset ends at a point above the V-nullcline, the membrane voltage will be pulled down below the reset voltage Vreset by the adaptation current.\nThe first pattern, tonic spiking with a sharp reset, can be reached either by setting b to a small value and shrinking the adaptation time constant to make Vadapt follow V very fast (at least, the adaptation time constant must be small enough to let Vadapt recover from the increment b within the inter-spike interval (ISI)), or by setting a to zero. Here, a has been set to zero, while gl has been doubled to keep the total conductance at a similar level. Parameters between simulation and measurement are only roughly mapped, as the precise mapping algorithm is still in progress; on a real chip there is a variation of transistor parameters which still needs to be counterbalanced by the parameter choice.\nSpike-frequency adaptation is caused by enlarging Vadapt at each detected spike, while still staying below the V-nullcline (equation 9). As a metric for adaptation, [24] and [16] use the accommodation index:\n\nA = 1/(N − k − 1) · Σ_{i=k}^{N} (ISI_i − ISI_{i−1})/(ISI_i + ISI_{i−1}).   (11)\n\nHere k determines the number of ISIs excluded from A to exclude transient behavior [15, 24] and can be chosen as one fifth of N for small numbers of ISIs [24]. The metric calculates the average of the difference between two neighboring ISIs weighted by their sum, so it should be zero for ideal tonic spiking. For our results we get an accommodation index of 0 ± 0.0003 for fast-spiking neurons in simulation and −0.0004 ± 0.001 in measurement. For adaptation the values are 0.1256 ± 0.0002 and 0.039 ± 0.001.\n\nFigure 6: Phase plane and transient plots from simulations and measurement results of the neuron stimulated by a step current of 600 nA: (a) tonic spiking, (b) spike frequency adaptation, (c) phasic burst, (d) tonic burst. 
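The accommodation index of equation 11 is straightforward to evaluate from a list of inter-spike intervals; a minimal sketch (indices are 1-based as in the formula, with the first k − 1 intervals only entering as predecessors):

```python
def accommodation_index(isi, k):
    """Accommodation index A of equation 11. isi holds the inter-spike
    intervals ISI_1..ISI_N; k discards transient behavior and can be
    chosen as about one fifth of N for short trains, following [24].
    Requires k >= 2 and N >= k + 2."""
    N = len(isi)
    s = sum((isi[i - 1] - isi[i - 2]) / (isi[i - 1] + isi[i - 2])
            for i in range(k, N + 1))
    return s / (N - k - 1)

# ideal tonic spiking: identical ISIs give A = 0
print(accommodation_index([10.0] * 10, k=2))            # 0.0
# adapting neuron: monotonically growing ISIs give A > 0
print(accommodation_index([10, 11, 12.1, 13.3, 14.6, 16.1], k=2))
```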
As parameters have been chosen to reproduce the patterns clearly (adaptation is switched off for tonic spiking and strong for spike-frequency adaptation), they are somewhat extreme in comparison to the values calculated in [24], which are 0.0045 ± 0.0023 for fast-spiking interneurons and 0.017 ± 0.004 for adapting neurons.\nDefining a burst by looking just at the spike frequency is ambiguous. We follow the definition used in [16] and define a burst as one or more sharp resets followed by a broad reset. The bursting results can be found in figure 6, too. To generate bursting behavior, the reset has to be set to a value above the exponential threshold, so that V is pulled upwards by the exponential directly after a spike.\nAs can be seen in figure 1, depending on the sharpness ∆t of the exponential term, the exact reset voltage Vreset might be critical for bursting when resetting above the exponential threshold, where the nullcline is already steep. The AdEx model is capable of irregular spiking, in contrast to the Izhikevich neuron [25], which uses a quadratic term to simulate the rise at a spike. 
The chaotic spiking capability of the AdEx model has been shown in [16]. In hardware, we observe that it is common to reach regimes where the exact number of spikes in a burst is not constant; thus the distance to the next spike or burst may differ in the next period. Another effect is that if the equilibrium potential (the potential where the nullclines cross) is near Vt, noise may cause the membrane to cross Vt and hence generate a spike (compare the phase planes in figure 6 c) and d)).\nFigure 6 shows tonic bursting and phasic bursting. In phasic bursting, the nullclines still cross in a stable fixed point: the resting potential caused by adaptation, leakage and stimulus is below the firing threshold of the exponential.\nPatterns reproduced in experiment and simulations but not shown here are phasic spiking and initial bursting.\n\n4 Discussion\n\nThe main feature of our neuron is its capability of directly reproducing the AdEx model. 
It is neither optimized for low power nor for small size, in contrast to the postulations by Livi in [6]. Nevertheless, it is low power in comparison to simulation on a supercomputer (an estimated 100 µW versus 370 mW on a Blue Gene/P [26] at an acceleration factor of 10^4, with the computing time of the Izhikevich neuron model [23] used as an estimate) and does not consume much chip area in comparison to the synapse array and the communication infrastructure on the HICANN (figure 2). Complex individual parameterization allows adaptation to different models. As our model works on an accelerated time scale of up to 10^5 times faster than biological real time, it is neither possible nor intended to interact with systems relying on biological real time. Instead, by scaling the system up to about a million neurons, it will be possible to do experiments which have never been feasible so far due to the long duration of numerical simulations at this scale, e.g. allowing large parameter sweeps, dense real-world stimuli, as well as many repetitions of experiments for gaining statistics.\nDue to the design approach of implementing an established model instead of developing a new model fitting best to hardware devices, we gain a neuron allowing neuroscientists to do experiments without being hardware specialists.\n\n5 Outlook\n\nThe neuron topology, in which several DenMems are interconnected to form a neuron, is predestined to be enhanced to a multi-compartment model. This will be the next design step.\nThe simulations and measurements in this work qualitatively reproduce patterns observed in biology and reproduced by the AdEx model in [16]. A method to directly map the parameters of the AdEx model quantitatively to the simulations has already been developed. 
This method needs to be enhanced to a mapping onto the real hardware, counterbalancing mismatch and accounting for the limited parameter resolution.\nEmbedded in the FACETS wafer-scale system, our neuron will complete the universality of the system as a versatile core for analog computation. Encapsulation of the parameter mapping into low-level software and PyNN [12] integration of the system will allow computational neuroscientists to do experiments on the hardware and compare them to simulations, or to do large experiments that are currently not implementable in a simulation.\n\nAcknowledgments\n\nThis work is supported in part by the European Union under grant no. IST-2005-15879 (FACETS).\n\nReferences\n\n[1] Carver A. Mead and M. A. Mahowald. A silicon model of early visual processing. Neural Networks, 1(1):91–97, 1988.\n[2] C. A. Mead. Analog VLSI and Neural Systems. Addison Wesley, Reading, MA, 1989.\n[3] Misha Mahowald and Rodney Douglas. A silicon neuron. Nature, 354(6354):515–518, Dec 1991.\n[4] E. Farquhar and P. Hasler. A bio-physically inspired silicon neuron. Circuits and Systems I: Regular Papers, IEEE Transactions on, 52(3):477–488, March 2005.\n[5] J. V. Arthur and K. Boahen. Silicon neurons that inhibit to synchronize. In Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on, pages 1186–1186, 2007.\n[6] P. Livi and G. Indiveri. A current-mode conductance-based silicon neuron for address-event neuromorphic systems. In Circuits and Systems, 2009. ISCAS 2009. IEEE International Symposium on, pages 2898–2901, 2009.\n[7] Jayawan H. B. Wijekoon and Piotr Dudek. Compact silicon neuron circuit with spiking and bursting behaviour. Neural Networks, 21(2-3):524–534, 2008. Advances in Neural Networks Research: IJCNN '07, 2007 International Joint Conference on Neural Networks.\n[8] J. Schemmel, A. Grübl, K. Meier, and E. Muller. 
Implementing synaptic plasticity in a VLSI spiking neural network model. In Proceedings of the 2006 International Joint Conference on Neural Networks (IJCNN). IEEE Press, 2006.\n[9] J. Schemmel, D. Brüderle, K. Meier, and B. Ostendorf. Modeling synaptic plasticity within networks of highly accelerated I&F neurons. In Proceedings of the 2007 IEEE International Symposium on Circuits and Systems (ISCAS), pages 3367–3370. IEEE Press, 2007.\n[10] Alain Destexhe. Conductance-based integrate-and-fire models. Neural Comput., 9(3):503–514, 1997.\n[11] Daniel Brüderle, Eric Müller, Andrew Davison, Eilif Muller, Johannes Schemmel, and Karlheinz Meier. Establishing a novel modeling tool: A python-based interface for a neuromorphic hardware system. Front. Neuroinform., 3(17), 2009.\n[12] A. P. Davison, D. Brüderle, J. Eppler, J. Kremkow, E. Muller, D. Pecevski, L. Perrinet, and P. Yger. PyNN: a common interface for neuronal network simulators. Front. Neuroinform., 2(11), 2008.\n[13] R. Brette and W. Gerstner. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J. Neurophysiol., 94:3637–3642, 2005.\n[14] FACETS. Fast Analog Computing with Emergent Transient States – project website. http://www.facets-project.org, 2010.\n[15] Henry Markram, Maria Toledo-Rodriguez, Yun Wang, Anirudh Gupta, Gilad Silberberg, and Caizhi Wu. Interneurons of the neocortical inhibitory system. Nat Rev Neurosci, 5(10):793–807, Oct 2004.\n[16] Richard Naud, Nicolas Marcille, Claudia Clopath, and Wulfram Gerstner. Firing patterns in the adaptive exponential integrate-and-fire model. Biological Cybernetics, 99(4):335–347, Nov 2008.\n[17] J. Schemmel, J. Fieres, and K. Meier. Wafer-scale integration of analog neural networks. In Proceedings of the 2008 International Joint Conference on Neural Networks (IJCNN), 2008.\n[18] J. Fieres, J. Schemmel, and K. 
Meier. Realizing biological spiking network models in a configurable wafer-scale hardware system. In Proceedings of the 2008 International Joint Conference on Neural Networks (IJCNN), 2008.\n[19] J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner. A wafer-scale neuromorphic hardware system for large-scale neural modeling. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pages 1947–1950, 2010.\n[20] T. S. Lande, H. Ranjbar, M. Ismail, and Y. Berg. An analog floating-gate memory in a standard digital technology. In Microelectronics for Neural Networks, 1996, Proceedings of Fifth International Conference on, pages 271–276, 1996.\n[21] Alain Destexhe, Diego Contreras, and Mircea Steriade. Mechanisms underlying the synchronizing action of corticothalamic feedback through inhibition of thalamic relay cells. Journal of Neurophysiology, 79:999–1016, 1998.\n[22] Martin Pospischil, Maria Toledo-Rodriguez, Cyril Monier, Zuzanna Piwkowska, Thierry Bal, Yves Frégnac, Henry Markram, and Alain Destexhe. Minimal Hodgkin–Huxley type models for different classes of cortical and thalamic neurons. Biological Cybernetics, 99(4):427–441, Nov 2008.\n[23] Eugene M. Izhikevich. Which Model to Use for Cortical Spiking Neurons? IEEE Transactions on Neural Networks, 15:1063–1070, 2004.\n[24] Shaul Druckmann, Yoav Banitt, Albert Gidon, Felix Schürmann, Henry Markram, and Idan Segev. A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data. Front Neurosci, 1(1):7–18, Nov 2007.\n[25] Eugene M. Izhikevich. Simple Model of Spiking Neurons. IEEE Transactions on Neural Networks, 14:1569–1572, 2003.\n[26] IBM. System Blue Gene solution. 
ibm.com/systems/deepcomputing/bluegene/, 2010.\n", "award": [], "sourceid": 425, "authors": [{"given_name": "Sebastian", "family_name": "Millner", "institution": null}, {"given_name": "Andreas", "family_name": "Gr\u00fcbl", "institution": null}, {"given_name": "Karlheinz", "family_name": "Meier", "institution": null}, {"given_name": "Johannes", "family_name": "Schemmel", "institution": null}, {"given_name": "Marc-Olivier", "family_name": "Schwartz", "institution": null}]}