{"title": "A Low-Power CMOS Circuit Which Emulates Temporal Electrical Properties of Neurons", "book": "Advances in Neural Information Processing Systems", "page_first": 678, "page_last": 686, "abstract": null, "full_text": "678 \n\nA LOW-POWER CMOS CIRCUIT WHICH EMULATES \nTEMWORALELECTIDCALPROPERTIES OF NEURONS \n\nJack L. Meador and Clint S. Cole \n\nElectrical and Computer Engineering Dept. \n\nWashington State University \n\nPullman WA. 99164-2752 \n\nABSTRACf \n\nThis paper describes a CMOS artificial neuron. The circuit is \ndirectly derived from the voltage-gated channel model of neural \nmembrane, has low power dissipation, and small layout geometry. \nThe principal motivations behind this work include a desire for high \nperformance, more accurate neuron emulation, and the need for \nhigher density in practical neural network implementations. \n\nINTRODUCTION \n\nPopular neuron models are based upon some statistical measure of known natural \nbehavior. Whether that measure is expressed in terms of average firing rate or a \nfiring probability, the instantaneous neuron activation is only represented in an \nabstract sense. Artificial electronic neurons derived from these models represent this \nexcitation level as a binary code or a continuous voltage at the output of a summing \namplifier. While such models have been shown to perform well for many applica(cid:173)\ntions, and form an integral part of much current work, they only partially emulate the \nmanner in which natural neural networks operate. They ignore, for example, \ndifferences in relative arrival times of neighboring action potentials -- an important \ncharacteristic known to exist in natural auditory and visual networks {Sejnowski, \n1986}. They are also less adaptable to fme-grained, neuron-centered learning, like \nthe post-tetanic facilitation observed in natural neurons. 
We are investigating the implementation and application of neuron circuits which better approximate natural neuron function. \n\nBACKGROUND \n\nThe major temporal artifacts associated with natural neuron function include the spatio-temporal integration of synaptic activity, the generation of an action potential (AP), and the post-AP hyperpolarization (refractory) period (Figure 1). Integration, manifested as a gradual membrane depolarization, occurs when the neuron accumulates sodium ions which migrate through pores in its cellular membrane. The rate of ion migration is related to the level of presynaptic AP bombardment, and is also known to be a non-linear function of transmembrane potential. Efferent AP generation occurs when the voltage-sensitive membrane of the axosomal hillock reaches some threshold potential, whereupon a rapid increase in sodium permeability leads to complete depolarization. Immediately thereafter, sodium pores \"close\" simultaneously with increased potassium permeability, thereby repolarizing the membrane toward its resting potential. The high potassium permeability during AP generation leads to the transient post-AP hyperpolarization state known as the refractory period. \n\nFigure 1. Temporal artifacts associated with neuron function. (1) gradual depolarization, (2) AP generation, (3) refractory period. \n\nSeveral analytic and electronic neural models have been proposed which embody these characteristics at varying levels of detail. These neuromimes have been used to good advantage in studying neuron behavior. However, with the advent of artificial neural networks (ANNs) for computing, emphasis has switched from modeling neurons for physiologic studies to developing practical neural network implementations. 
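The three temporal artifacts of Figure 1 can be summarized in a toy discrete-time simulation. This sketch is illustrative only; every constant (resting, threshold, spike, and hyperpolarization potentials, leak rate, refractory length) is an assumed value, not one taken from the text:

```python
# Toy discrete-time model of the three artifacts in Figure 1:
# (1) gradual depolarization, (2) AP generation, (3) refractory period.
# All constants are illustrative assumptions (in mV-like units).

V_REST, V_THRESH, V_SPIKE, V_HYPER = -70.0, -55.0, 30.0, -80.0
LEAK = 0.05           # fraction of depolarization leaking away per step
REFRACTORY_STEPS = 5  # duration of the post-AP hyperpolarized state

def simulate(inputs):
    """Return the membrane-potential trace for a train of input currents."""
    v, refractory, trace = V_REST, 0, []
    for i_syn in inputs:
        if refractory > 0:                  # (3) refractory period
            refractory -= 1
            v = V_HYPER if refractory else V_REST
        elif v >= V_THRESH:                 # (2) AP generation
            v, refractory = V_SPIKE, REFRACTORY_STEPS
        else:                               # (1) temporal integration w/ leak
            v += i_syn - LEAK * (v - V_REST)
        trace.append(v)
    return trace

trace = simulate([2.0] * 40)  # constant presynaptic bombardment
```

Driving the model with a constant input train produces the cycle described above: gradual depolarization toward threshold, a spike, and a hyperpolarized refractory dwell before integration resumes.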
\nAs the desire for high-performance ANNs grows, models amenable to hardware implementation become more attractive. \n\nThe general idea behind electronic neuromimes is not new. Beginning in 1937 with work by Schmitt {Schmitt, 1937}, electronic circuits have been used to model and study neuronal behavior. In the late 1960's, Lewis {Lewis, 1968} developed a circuit which simulated the Hodgkin-Huxley model for a single neuron, followed by MacGregor's circuit {MacGregor, 1973} in the early 1970's which modeled a group of 50 neurons. With the availability of VLSI in the 1980's, electronic neural implementations have largely moved to the realm of integrated circuits. Two different strategies have been documented: analog implementations employing operational amplifiers {Graf, et al, 1987, 1988; Sivilotti, et al, 1986; Raffel, 1988; Schwartz, et al, 1988}; and digital implementations such as systolic arrays {Kung, 1988}. \n\nMore recently, impulse neural implementations have been receiving increased attention. Like other models, these neuromimes generate outputs based on some non-linear function of the weighted net inputs. However, interneuron communication is realized through impulse streams rather than continuous voltages or binary numbers {Murray, 1988; El-Leithy, 1987}. Impulse networks communicate neuron activation as variable pulse repetition rates. The impulse neuron circuits which shall be discussed offer both small geometry and low power dissipation, as well as a closer approximation to natural neuron function. \n\nA CMOS IMPULSE NEURON \n\nAn impulse neuron circuit developed for use in CMOS neural networks is shown in Figure 2. In this circuit, membrane ion current is modeled by charge flowing to and from Ca. Potassium and sodium influx is represented by current flow from Vdd to the capacitor, and ion efflux by flow from the capacitor to ground. 
The Field-Effect Transistors (FETs) connected between Vdd, Vss, and the capacitor emulate the voltage- and chemically-gated ion channels found in natural neural membrane. In the Figure, FET 1 corresponds to the post-synaptic chemically-gated ion channels associated with one synapse. FETs 2, 3, and 4 emulate the voltage-gated channels distributed throughout a neuron membrane. The following equations summarize circuit operation: \n\nCa dVa/dt = β1 E(Vr,Va) + β23 F(Va) - β4 G(Va) \n\nE(Vr,Va) = (Vr - Va - Vtn)(Vdd - Va) - (Vdd - Va)²/2 \n\nF(Va) = (Vdd - Vtp)(Vdd - Va) - (Vdd - Va)²/2 if g(t) ≠ 0; 0 otherwise \n\nG(Va) = (Vdd - Vtn)Va - Va²/2 if h(t) ≠ 0; 0 otherwise \n\ng(t) = h(t)(1 - h(t - ε)) \n\nh(t) = 1 if Va(t) > Vth; 0 otherwise \n\nThe use of these equations yields a back-propagation algorithm for impulse networks which does not perform true gradient descent, yet which has so far been observed to learn solutions to logic problems such as XOR and the 4-2-4 encoder. Investigation of other offline learning algorithms for impulse networks continues. Currently, this algorithm fulfills the immediate need for an offline procedure which can be used in the design of multi-layer impulse neural networks. \n\nIMPLEMENTATION \n\nTwo requirements for high-density integration are low power dissipation and small circuit geometry. CMOS impulse neurons use switching circuits having no continuous power dissipation. A conventional op-amp circuit must draw constant current to achieve linear bias. An op-amp also requires larger circuit geometries for gain accuracy over typical fabrication process variations. Such is not the case for nonlinear switching circuits. As a result, these neurons and others like them are expected to help improve analog neural network integration density. 
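The circuit state equation can be exercised with a simple forward-Euler sketch. The edge-gated F term is omitted for brevity, and all device constants (Vdd, FET and comparator thresholds, the β coefficients, the time step) are illustrative assumptions rather than values from the paper:

```python
# Forward-Euler sketch of  Ca dVa/dt = B1*E - B4*G  (F term omitted).
# All constants below are illustrative assumptions, not paper values.

VDD, VTN, VTH = 5.0, 1.0, 2.5    # supply, FET threshold, firing threshold
B1, B4, CA, DT = 1.0, 5.0, 1.0, 0.01

def triode(vgs, vds, vt):
    """Simplified triode-region FET current, clamped to zero at cutoff."""
    return max(0.0, (vgs - vt) * vds - vds * vds / 2.0)

def pulse_count(vr, steps=20000):
    """Number of output impulses for a constant synaptic input voltage vr."""
    va, h_prev, count = 0.0, 0, 0
    for _ in range(steps):
        h = 1 if va > VTH else 0            # axon-hillock comparator h(t)
        if h and not h_prev:                # rising edge = one output impulse
            count += 1
        e = triode(vr - va, VDD - va, VTN)  # FET 1: gated ion influx E(Vr,Va)
        g = h * triode(VDD, va, VTN)        # FET 4: gated ion efflux G(Va)
        va += DT * (B1 * e - B4 * g) / CA   # integrate membrane charge on Ca
        h_prev = h
    return count
```

Because the synaptic FET only charges Ca while its gate overdrive is positive, sub-threshold inputs yield no impulses, while stronger inputs raise the pulse repetition rate -- the activation coding described above.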
\nAn impulse neuron circuit has been designed which eliminates FETs 2 and 3 of Figure 2 in exchange for reduced layout area. In this circuit, Va no longer exhibits an activation potential spike. This spike seems irrelevant given the buffered impulse available at the axon output. The modified neuron circuit occupies a 200 x 25 lambda chip area. A fixed FET-synapse occupies a 16 by 18 lambda rectangle. With these dimensions, a full-interconnect layout containing 40 neurons and 1600 fixed connections will fit on a MOSIS 2-micron tiny chip. XOR and 4-2-4 networks of these circuits are being developed for 2-micron CMOS. \n\nCONCLUSION \n\nThe motivation of this work is to improve neural network implementation technology by designing CMOS circuits derived from the temporal characteristics of natural neurons. The results obtained thus far include: \n\nTwo CMOS circuits which closely correspond to the voltage-gated-channel model of natural neural membrane. \n\nSimulations which show that these impulse neurons emulate gross artifacts of natural neuron function. \n\nInitial work on a back-propagation algorithm which learns logic solutions using the impulse neuron activation function. \n\nThe development of prototype impulse network ICs. \n\nFuture goals involve extending this investigation to plastic synapse and neuron circuits, alternate algorithms for both offline and online learning, and practical implementations. \n\nReferences \n\nH. P. Graf, W. Hubbard, L. D. Jackel, P. G. N. deVegvar. A CMOS Associative Memory Chip. IEEE ICNN Conf. Proc., pp. 461-468, (1987). \nH. P. Graf, L. D. Jackel, W. E. Hubbard. VLSI Implementation of a Neural Network Model. IEEE Computer, pp. 41-49, (1988). \nA. C. Guyton. Chapt. 10, Organization of the Nervous System: Basic Functions of Synapses. Textbook of Physiology, p. 136, (1986). \nN. El-Leithy, R. W. Newcomb, M. Zaghloul. 
A Basic MOS Neural-Type Junction: A Perspective on Neural-Type Microsystems. IEEE ICNN Conf. Proc., pp. 469-477, (1987). \nE. R. Lewis. Using Electronic Circuits to Model Simple Neuroelectric Interactions. Proc. IEEE 56, pp. 931-949, (1968). \nR. J. MacGregor, R. M. Oliver. A General-Purpose Electronic Model for Arbitrary Configurations of Neurons. J. Theor. Biol. 38, pp. 527-538, (1973). \nS. Y. Kung and J. N. Hwang. Parallel Architectures for Artificial Neural Nets. IEEE ICNN Conf. Proc., pp. II-165 to II-172, (1988). \nJ. Mann, R. Lippmann, B. Berger, J. Raffel. A Self-Organizing Neural Net Chip. IEEE Cust. Integr. Ckts. Conf., pp. 10.3.1-10.3.5, (1988). \nA. F. Murray, A. V. W. Smith. Asynchronous VLSI Neural Networks Using Pulse-Stream Arithmetic. IEEE Jnl. of Solid-State Circuits 23, pp. 688-697, (1988). \nJ. I. Raffel. Electronic Implementation of Neuromorphic Systems. IEEE Cust. Integr. Ckts. Conf., pp. 10.1.1-10.1.7, (1988). \nD. Rumelhart, G. E. Hinton, and R. J. Williams. Learning Internal Representations by Error Propagation. Parallel Distributed Processing, Vol. 1: Foundations, pp. 318-364, (1986). \nO. H. Schmitt. Mechanical Solution of the Equations of Nerve Impulse Propagation. Am. J. Physiol. 119, pp. 399-400, (1937). \nD. B. Schwartz, R. E. Howard. A Programmable Analog Neural Network Chip. IEEE Cust. Integr. Ckts. Conf., pp. 10.2.1-10.2.4, (1988). \nT. J. Sejnowski. Open Questions About Computation in Cerebral Cortex. Parallel Distributed Processing, Vol. 2: Psychological and Biological Models, pp. 378-385, (1986). \nM. A. Sivilotti, M. R. Emerling, C. A. Mead. VLSI Architectures for Implementation of Neural Networks. Am. Inst. of Phys., pp. 408-413, (1986). \n", "award": [], "sourceid": 172, "authors": [{"given_name": "Jack", "family_name": "Meador", "institution": null}, {"given_name": "Clint", "family_name": "Cole", "institution": null}]}