{"title": "Digital-Analog Hybrid Synapse Chips for Electronic Neural Networks", "book": "Advances in Neural Information Processing Systems", "page_first": 769, "page_last": 776, "abstract": null, "full_text": "Digital-Analog Hybrid Synapse Chips for Electronic Neural Networks \n\n769 \n\nDigital-Analog Hybrid Synapse Chips for \n\nElectronic Neural Networks \n\nA  Moopenn,  T.  Duong,  and AP. Thakoor \n\nCenter for  Space  Microelectronics  Technology \n\nJet  Propulsion  Laboratory/California  Institute  of Technology \n\nPasadena,  CA 91109 \n\nABSTRACf \n\nCascadable,  CMOS  synapse  chips  containing  a  cross-bar  array  of \n32x32  (1024)  programmable  synapses  have  been  fabricated  as \n\"building  blocks\"  for  fully  parallel  implementation  of  neural \nnetworks.  The  synapses  are  based  on  a  hybrid  digital-analog \ndesign  which  utilizes  on-Chip  7-bit data  latches  to store quantized \nweights and two-quadrant multiplying DAC's to compute weighted \noutputs.  The  synapses  exhibit  6-bit  resolution  and  excellent \nmonotonicity  and  consistency  in  their  transfer  characteristics.  A \n64-neuron  hardware  incorporating  four  synapse  chips  has  been \nfabricated  to  investigate the performance of feedback  networks  in \noptimization  problem  solving. \nIn  this  study,  a  7x7,  one-to-one \nassignment  net  and  the  Hop field-Tank  8-city  traveling  salesman \nproblem  net  have  been  implemented  in  the  hardware.  The \nnetwork's ability to obtain optimum or near optimum solutions in \nreal  time  has  been  demonstrated. \n\n1  INTRODUCTION \nA  large  number  of  electrically  modifiable  synapses  is  often  required  for  fully \nparallel  analog  neural  network  hardware.  Electronic  synapses  based  on  CMOS, \nEEPROM,  as  well  as  thin  film  technologies  are  actively  being  developed  [1-5]. 
\nOne preferred approach is based on a hybrid digital-analog design which can easily be implemented in CMOS with simple interface and analog circuitry. The hybrid design utilizes digital memories to store the synaptic weights and digital-to-analog converters to perform analog multiplication. A variety of synapse chips based on such hybrid designs have been developed and used as \"building blocks\" in larger neural network hardware systems fabricated at JPL. \n\nIn this paper, the design and operational characteristics of the hybrid synapse chips are described. The development of a 64-neuron hardware incorporating several of the synapse chips is also discussed. Finally, a hardware implementation of two global optimization nets, namely, the one-to-one assignment optimization net and the Hopfield-Tank traveling salesman net [6], and their performance based on our 64-neuron hardware are discussed. \n\n2 CHIP DESIGN AND ELECTRICAL CHARACTERISTICS \nThe basic design and operational characteristics of the hybrid digital-analog synapse chips are described in this section. A simplified block diagram of the chips is shown in Fig. 1. The chips consist of an address/data de-multiplexer, row and column address decoders, 64 analog input/output lines, and 1024 synapse cells arranged in the form of a 32x32 cross-bar matrix. The synapse cells along the i-th row have a common output, xi, and similarly, synapses along the j-th column have a common input, yj. The synapse input/output lines are brought off-chip for multi-chip expansion to a larger synaptic matrix. The synapse cell, based on a hybrid digital-analog design, essentially consists of a 7-bit static latch and a 7-bit, two-quadrant multiplying DAC. 
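The latch-plus-DAC cell can be summarized behaviorally. The sketch below is a minimal Python model, not the circuit netlist: the function name, the `i_unit` parameter, and the sign-magnitude interpretation of the 7-bit code are illustrative assumptions consistent with the two-quadrant, binary-weighted design described above.

```python
def synapse_current(weight_code, i_unit):
    """Behavioral sketch of one hybrid synapse cell (hypothetical model,
    not the chip's circuit).

    weight_code: 7-bit sign-magnitude code, i.e. an int in [-63, +63],
                 as stored in the on-chip data latch.
    i_unit:      unit current mirrored from the column input circuit.
    """
    if not -63 <= weight_code <= 63:
        raise ValueError("7-bit sign-magnitude weight must lie in [-63, 63]")
    sign = -1 if weight_code < 0 else 1
    magnitude = abs(weight_code)
    # Six binary-weighted current sources (1, 2, 4, 8, 16, 32 units); each
    # magnitude bit of the latch switches one source onto the output line.
    total = sum((1 << b) * i_unit for b in range(6) if (magnitude >> b) & 1)
    # The sign bit steers the summed current (two-quadrant multiplication).
    return sign * total
```

Under this model the cell output is simply (weight code) x (unit current), which is the ideal transfer characteristic that the measured curves in Figs. 3 and 4 approximate.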
\n\nFigure 1: Simplified block diagram of the hybrid 32x32x7-bit synapse chip. \n\nA circuit diagram of the 7-bit DAC is shown in Fig. 2. The DAC consists of a current input circuit, a set of binary-weighted current sources, and a current steering circuit. The current in the input circuit is mirrored by the binary-weighted current sources for all synapses along a column. In one version of the chips, a single long-channel FET is used to convert the synapse input voltage to a current. In addition, the gate of the transistor is connected internally to the gates of the other long-channel transistors. This common gate is accessible off-chip and provides a means for controlling the overall \"gain\" of the synapses in the chip. In a second chip version, an external resistor is employed to perform input voltage to current conversion when a high linearity in the synapse transfer characteristics is desired. \n\nHybrid 32x32x7-bit synapse chips with and without long-channel transistors were fabricated through MOSIS using a 2-micron, n-well CMOS process. Typical measured synapse response (I-V) curves from these chips are shown in Figs. 3a and 3b for weight values of 0, +/- 1, 3, 7, 15, 31, and 63. The curves in Fig. 3a were obtained for a synapse incorporating an on-chip long-channel FET with a gate bias of 5 volts. The non-linear synapse response is evident and can be seen to be similar to that of a \"threshold\" current source. The non-linear behavior is mainly attributed to the nonlinear drain characteristics of the long-channel transistor. 
It should be pointed out that synapses with such characteristics are especially suited for neural networks with neurons operating in the high-gain limit, in which case the nonlinearity may even be desirable. The set of curves in Fig. 3b were obtained using an external 10-megaohm resistor for the V-I conversion. For input voltages greater than about twice the transistor's threshold voltage (~0.8 V), the synapse's current output is a highly linear function of the input voltage. The linear characteristics achieved with the use of external resistors would be applicable in feedforward nets with learning capabilities. \n\nFigure 2: Circuit diagram of the 7-bit multiplying DAC. \n\nFigure 4 shows the measured output of the synapse as the weight is incremented from -60 to +60. The synapse exhibits excellent monotonicity and step-size consistency. Based on a random sampling of synapses from several chips, the step-size standard deviation due to mismatched transistor characteristics is typically less than 25 percent. \n\n3 64-NEURON HARDWARE \nThe hybrid synapse chips are ideally suited for hardware implementation of feedback neural networks for combinatorial global optimization problem solving or associative recall, where the synaptic weights are known a priori. For example, in a Hopfield-type feedback net [7], the weights can be calculated directly from a set of cost parameters or a set of stored vectors. The desired weights are quantized and downloaded into the memories of the synapse chips. 
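The quantize-and-download step can be sketched in a few lines. This is a hypothetical host-side helper, assuming (as the measured curves suggest) a sign-magnitude code in [-63, 63] and a caller-chosen full-scale magnitude `w_max`; it is not the actual JPL download software.

```python
def quantize_weight(w, w_max):
    """Map a real-valued weight to the 7-bit sign-magnitude code (an int in
    [-63, 63]) to be written into a synapse latch. `w_max` is the assumed
    full-scale weight magnitude (illustrative parameter)."""
    code = round(63 * w / w_max)      # scale to the 6-bit magnitude range
    return max(-63, min(63, code))    # clip to the programmable range
```

Weights outside the full scale saturate at +/-63, which is the hardware analogue of clipping an over-range cost parameter.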
On the other hand, in supervised learning applications, learning can be performed off-line, taking into consideration the operating characteristics of the synapses, and the new updated weights are simply reprogrammed into the synaptic hardware during each training cycle. \n\nFigure 3: Transfer characteristics of a 7-bit synapse for weight values of 0, +/- 1, 3, 7, 15, 31, 63, (a) with long-channel transistors for voltage to current conversion (Vgg = 5.0 volts) and (b) with external 10-megaohm resistor. \n\nFigure 4: Synapse output as weight value is incremented from -60 to +60 (Vgg = Vin = 5.0 volts). \n\nA 64-neuron breadboard system incorporating several of the hybrid synapse chips has been fabricated to demonstrate the utility of these building-block chips, and to investigate the dynamical properties, global optimization problem solving abilities, and application potential of neural networks. The system consists of an array of 64 discrete neurons and four hybrid synapse chips connected to form a 64x64 cross-bar synapse matrix. Each neuron is an operational amplifier operating as a current-summing amplifier. 
A circuit model of a neuron with some synapses is shown in Fig. 5. The system dynamical equations are given by: \n\nCf dVi/dt = Σj Tij Vj - Vi/Rf + Ii, \n\nwhere Vi is the output of neuron i, Tij is the synaptic weight from neuron j to neuron i, Rf and Cf are the feedback resistance and capacitance of the neuron, τf = Rf Cf, and Ii is the external input current. For our system, Rf was about 50 kilo-ohms, and Cf was about 10 pF, a value large enough to ensure stability against oscillations. The system was interfaced to a microcomputer which allows downloading of the synaptic weight data and analog readout of the neuron states. \n\nFigure 5: Electronic circuit model of neuron and synapses. \n\n4 GLOBAL OPTIMIZATION NEURAL NETS \nTwo combinatorial global optimization problems, namely, the one-to-one assignment problem and the traveling salesman problem, were selected for our neural net hardware implementation study. Of particular interest is the performance of the optimization network in terms of the quality and speed of solutions in light of hardware limitations. \n\nIn the one-to-one assignment problem, given two sets of N elements and a cost assignment matrix, the objective is to assign each element in one set to an element in the second set so as to minimize the total assignment cost. In our neural net implementation, the network is a Hopfield-type feedback net consisting of an NxN array of assignment neurons. In this representation, a permissible set of one-to-one assignments corresponds to a permutation matrix. Thus, lateral inhibition between assignment neurons is employed to ensure that there is only one active neuron in each row and in each column of the neuron array. 
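The feedback dynamics of a current-summing neuron array can be integrated numerically. The sketch below assumes the standard Hopfield-circuit form Cf dVi/dt = Σj Tij Vj - Vi/Rf + Ii with the component values quoted in the text (Rf ~ 50 kilo-ohms, Cf ~ 10 pF); the two-neuron weight matrix and input currents are hypothetical, chosen only to keep forward-Euler integration stable.

```python
import numpy as np

def settle(T, I, Rf=50e3, Cf=10e-12, dt=1e-8, steps=2000):
    """Forward-Euler sketch of Cf dV/dt = T @ V - V/Rf + I.
    T: weight matrix (conductances), I: external input currents."""
    V = np.zeros(len(I))
    for _ in range(steps):
        dV = (T @ V - V / Rf + I) / Cf
        V = V + dt * dV
    return V

# Two mutually inhibiting neurons (hypothetical values): the feedback
# drives their outputs toward opposite signs.
T = np.array([[0.0, -1e-5],
              [-1e-5, 0.0]])   # siemens
I = np.array([1e-6, -1e-6])    # amps
V = settle(T, I)               # settles near [0.1, -0.1] volts
```

With τf = Rf Cf = 0.5 microseconds, the 20-microsecond simulated window corresponds to a few tens of neuron time constants, comfortably past settling.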
To force the network to favor assignment sets with low total assignment cost, each assignment neuron is also given an analog prompt, that is, a fixed analog excitation proportional to a positive constant minus its assignment cost. \n\nIn an energy function description, all valid assignment sets correspond to energy minima of equal depth located at corners of the NxN-dimensional hypercube (in the large neuron gain limit). The analog prompt term in the energy function has the effect of \"tilting\" the energy surface toward the hypercube corners with low total assignment cost. Thus, the assignment net may be described as a first-order global optimization net because the analog cost parameters appear only in the linear term of the energy function, i.e., the analog information simply appears as fixed biases and the interaction between neurons is of a binary nature. Since the energy surface contains a large number of local energy minima (~N!), there is the strong possibility that the network will get trapped in a local minimum, depending on its initial state. Simulated annealing can be used to reduce this likelihood. One approach is to start with a very low neuron gain and increase it slowly as the network evolves to a stable state. An alternative but similar approach, which can easily be implemented with the current hybrid synapse chips, is to gradually increase the synapse gain. \n\nA 7x7 one-to-one assignment problem was implemented in the 64-neuron hardware to investigate the performance of the assignment optimization net. An additional neuron was used to provide the analog biases (quantized to 6 bits) to the assignment neurons. Convergence statistics were obtained from 100 randomly generated cost assignment matrices. For each cost matrix, the synapse gain and annealing time were optimized and the solution obtained by the hardware was recorded. The network generally performed well with a large synapse gain (common gate bias of 7 volts) and an annealing time of about 10 neuron time constants (~500 usec). The unusually large anneal time observed emphasizes the importance of suppressing the quadratic energy term while maintaining the analog prompt in the initial course of the network's state trajectory. Solution distributions for each cost matrix were also obtained from a computer search for the purpose of rating the hardware solutions. The performance of the assignment net is summarized in Fig. 6a. In all cases, the network obtained solutions which were in the best 1%. Moreover, the best solutions were obtained in 40% of the cases, and the first, second, or third best in 75% of the cases. These results are very encouraging in spite of the limited resolution of the analog biases and the fact that the analog biases also vary in time with the synapse gain. \n\nThe Hopfield-Tank traveling salesman problem (TSP) network [6] was also investigated in the 64-neuron hardware. In this implementation, the analog cost information (i.e., the inter-city distances) is encoded in the connection strengths of the synapses. Lateral inhibition is provided via binary synapses to ensure a valid city tour. However, the inter-city distance provides additional interaction between neurons via excitatory synapses with strength proportional to a positive constant minus the distance. Thus the TSP net, considerably more complex than the assignment net, may be described as a second-order global optimization net. \n\nAn 8-city Hopfield-Tank TSP net was implemented in the 64-neuron hardware. Convergence statistics were similarly obtained from 100 randomly generated 8-city positions. The network was observed to give good solutions using a large synapse gain (common gate bias = 7 volts) and an annealing time of about one neuron time constant (~50 usec). As shown in Fig. 6b, the TSP net found tours which were in the best 6%. It gave the best tours in 11% of the cases and the first to third best tours in 31% of the cases. Although these results are quite good, the performance of the TSP net compares less favorably with the assignment net. This can be expected due to the increased complexity of the TSP net. Furthermore, since the initial state is arbitrary, the TSP net is more likely to settle into a local minimum before reaching the global minimum. On the other hand, in the assignment net, the analog prompt helps to establish an initial state which is close to the global minimum, thereby increasing its likelihood of converging to the optimum solution. \n\nFigure 6: Performance statistics for (a) 7x7 assignment problem and (b) 8-city traveling salesman problem. \n\n5 CONCLUSIONS \nCMOS synapse chips based on a hybrid analog-digital design are ideally suited as building blocks for the development of fully parallel and analog neural net hardware. The chips described in this paper feature 1024 synapses arranged in a 32x32 cross-bar matrix with 127 programmable weight levels for each synapse. Although limited by the process variation in the chip fabrication, a 6-bit weight resolution is achieved with our design. A 64-neuron hardware incorporating several of the synapse chips is fabricated to investigate the performance of feedback networks in optimization problem solving. The ability of such networks to provide optimum or near-optimum solutions to the one-to-one assignment problem and the traveling salesman problem is demonstrated in hardware. The neural hardware is capable of providing real-time solutions with settling times in the 50-500 usec range, which can be further reduced to 1-10 usec with the incorporation of on-chip neurons. \n\nAcknowledgements \n\nThe work described in this paper was performed by the Center for Space Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, and was sponsored in part by the Joint Tactical Fusion Program Office and the Defense Advanced Research Projects Agency, through an agreement with the National Aeronautics and Space Administration. The authors thank John Lambe and Assad Abidi for many useful discussions, and Tim Shaw for his valuable assistance in the chip-layout design. \n\nReferences \n\n1. S. Eberhardt, T. Duong, and A. Thakoor, \"A VLSI Analog Synapse 'Building Block' Chip for Hardware Neural Network Implementations,\" Proc. IEEE 3rd Annual Parallel Processing Symp., Fullerton, ed. L.H. Canter, vol. 1, pp. 257-267, Mar. 29-31, 1989. \n2. A. Moopenn, A.P. Moopenn, and T. Duong, \"Digital-Analog-Hybrid Neural Simulator: A Design Aid for Custom VLSI Neurochips,\" Proc. SPIE Conf. High Speed Computing, Los Angeles, ed. Keith Bromley, vol. 1058, pp. 147-157, Jan. 17-18, 1989. \n3. M. Holler, S. Tam, H. Castro, and R. Benson, \"An Electrically Trainable Artificial Neural Network (ETANN) with 10240 'Floating Gate' Synapses,\" Proc. IJCNN, Wash. D.C., vol. 2, pp. 191-196, June 18-22, 1989. \n4. A.P. Thakoor, A. Moopenn, J. Lambe, and S.K. Khanna, \"Electronic Hardware Implementations of Neural Networks,\" Appl. Optics, vol. 26, no. 23, 1987, pp. 5085-5092. \n5. S. Thakoor, A. Moopenn, T. Daud, and A.P. Thakoor, \"Solid State Thin Film Memistor for Electronic Neural Networks,\" J. Appl. Phys., 1990 (in press). \n6. J.J. Hopfield and D.W. Tank, \"Neural Computation of Decisions in Optimization Problems,\" Biol. Cybern., vol. 52, pp. 141-152, 1985. \n7. J.J. Hopfield, \"Neurons with Graded Response Have Collective Computational Properties Like Those of Two-State Neurons,\" Proc. Nat'l Acad. Sci., vol. 81, 1984, pp. 3088-3092. \n", "award": [], "sourceid": 239, "authors": [{"given_name": "Alexander", "family_name": "Moopenn", "institution": null}, {"given_name": "T.", "family_name": "Duong", "institution": null}, {"given_name": "A.", "family_name": "Thakoor", "institution": null}]}