{"title": "Active Noise Canceling Using Analog Neuro-Chip with On-Chip Learning Capability", "book": "Advances in Neural Information Processing Systems", "page_first": 664, "page_last": 670, "abstract": null, "full_text": "Active Noise Canceling using Analog Neuro-Chip with On-Chip Learning Capability \n\nJung-Wook Cho and Soo-Young Lee \n\nComputation and Neural Systems Laboratory \nDepartment of Electrical Engineering \nKorea Advanced Institute of Science and Technology \n373-1 Kusong-dong, Yusong-gu, Taejon 305-701, Korea \nsylee@ee.kaist.ac.kr \n\nAbstract \n\nA modular analog neuro-chip set with on-chip learning capability is developed for active noise canceling. The analog neuro-chip set incorporates the error backpropagation learning rule for practical applications and allows pin-to-pin interconnections for multi-chip boards. The developed neuro-board demonstrated active noise canceling without any digital signal processor. Multi-path fading of acoustic channels, random noise, and nonlinear distortion of the loudspeaker are compensated by the adaptive learning circuits of the neuro-chips. Experimental results are reported for cancellation of car noise in real time. \n\n1 INTRODUCTION \n\nBoth analog and digital implementations of neural networks have been reported. Digital neuro-chips can be designed and fabricated with the help of well-established CAD tools and digital VLSI fabrication technology [1]. Although analog neuro-chips have potential advantages in integration density and speed over digital chips [2], they suffer from non-ideal characteristics of the fabricated chips, such as offset and nonlinearity, and the fabricated chips are not flexible enough to be used for many different applications.
Also, very careful design is required, and the characteristics of the fabricated chips depend strongly on the fabrication process. \n\nFor the implementation of analog neuro-chips there exist two different approaches, i.e., with and without on-chip learning capability [3,4]. Currently the majority of analog neuro-chips do not have learning capability, while many practical applications require on-line adaptation to continuously changing environments. Therefore neuro-chips with on-chip learning capability are essential for such practical applications. A modular architecture is also advantageous, as it provides the flexibility to implement many large complex systems from the same chips. \n\nAlthough many applications have been studied for analog neuro-chips, it is very important to find proper problems where analog neuro-chips may have potential advantages over popular DSPs. We believe applications with analog input/output signals and high computational requirements are such problems. For example, active noise control [5] and adaptive equalizers [6,7] are good applications for analog neuro-chips. \n\nIn this paper we report a demonstration of active noise canceling, which may have many applications in the real world. A modular analog neuro-chip set is developed with on-chip learning capability, and a neuro-board is fabricated from multiple chips with PC interfaces for input and output measurements. Unlike our previous implementations of adaptive equalizers with binary outputs [7], both input and output values are analog in this noise canceling task. \n\nFigure 1.
Block diagram of a synapse cell \n\nFigure 2. Block diagram of a neuron cell \n\n2 ANALOG NEURO-CHIP WITH ON-CHIP LEARNING \n\nWe have developed analog neuro-chips with error backpropagation learning capability. With the modular architecture, the developed analog neuro-chip set consists of a synapse chip and a neuron chip [8]. The basic cell of the synapse chip is shown in Figure 1. Each synapse cell receives two inputs, i.e., the pre-synaptic neural activation x and the error correction term δ, and generates two outputs, i.e., the feed-forward signal wx and the back-propagated error wδ. It also updates a stored weight w by the amount xδ. Therefore, a synapse cell consists of three multiplier circuits and one analog storage for the synaptic weight. Figure 2 shows the basic cell of the neuron chip, which collects signals from synapses in the previous layer and distributes signals to synapses in the following layer. Each neuron body receives two inputs, i.e., the post-synaptic neural activation o and the back-propagated error δ from the following layer, and generates two outputs, i.e., the sigmoid-squashed neural activation o and a new back-propagated error δ multiplied by a bell-shaped sigmoid derivative. The back-propagated error may be input to the synapse cells in the previous layer. \n\nTo provide easy connectivity with other chips, the two inputs of the synapse cell are represented as voltages, while the two outputs are represented as currents for simple current summation. On the other hand, the inputs and outputs of the neuron cell are represented as currents and voltages, respectively. For simple pin-to-pin connections between chips, one package pin is assigned to each input and output of the chip. No time-multiplexing is introduced, and no other control is required for multi-chip and multi-layer systems. However, this makes the number of package pins the main limiting factor for the number of synapse and neuron cells in the developed chip set. \n\nAlthough many simplified multipliers have been reported for high-density integration, their performance is limited in linearity, resolution, and speed. For on-chip learning it is desirable to have high precision, so a faithful implementation of the 4-quadrant Gilbert multiplier is used. In particular, the multiplier for weight updates in the synapse cell requires high precision [9]. The synaptic weight is stored on a capacitor, and an MOS switch is used to allow current flow from the multiplier to the capacitor during a short time interval for weight adaptation. For applications like active noise control [5] and telecommunications [6,7], tapped analog delay lines are also designed and integrated in the synapse chip. To reduce offset accumulation, a parallel analog delay line is adopted, so that the same offset voltage is introduced by the operational amplifiers at all nodes [10]. Diffusion capacitors of 2.2 pF are used for the storage of the tapped analog delay line. \n\nIn a synapse chip, 250 synapse cells are integrated in a 25x10 array with a 25-tap analog delay line. Inputs may be applied either from the analog delay line or from external pins in parallel. To select a capacitor in a cell for refresh, decoders are placed along the columns and rows. The actual size of the synapse cell is 141 µm x 179 µm, and the size of the synapse chip is 5.05 mm x 5.05 mm. The chip is fabricated in a 0.8 µm single-poly CMOS process.
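The synapse- and neuron-cell dataflow described above (feed-forward signal wx, back-propagated error wδ, weight update by xδ, sigmoid squashing, and the sigmoid-derivative product) amounts to standard error backpropagation. The following NumPy sketch models that dataflow in software; the layer sizes, weight initialization, and learning rate are illustrative assumptions, and nothing here models the analog non-idealities of the chips.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

class SynapseLayer:
    """Software model of the synapse array of Figure 1: each cell performs
    three multiplications -- forward w*x, backward w*delta, update x*delta."""
    def __init__(self, n_in, n_out, lr=1.0, seed=0):
        self.w = np.random.default_rng(seed).normal(0.0, 0.3, (n_out, n_in))
        self.lr = lr

    def forward(self, x):
        self.x = x
        return self.w @ x  # feed-forward signals, summed like output currents

    def backward(self, delta):
        back = self.w.T @ delta                        # back-propagated error w*delta
        self.w += self.lr * np.outer(delta, self.x)    # capacitor weight update by x*delta
        return back

class NeuronLayer:
    """Software model of the neuron cell of Figure 2: sigmoid squashing on the
    forward pass; incoming error times the bell-shaped sigmoid derivative backward."""
    def forward(self, s):
        self.o = sigmoid(s)
        return self.o

    def backward(self, delta):
        return delta * self.o * (1.0 - self.o)
```

Chaining SynapseLayer/NeuronLayer pairs reproduces the 2-layer Perceptron of the neuro-board: one forward sweep and one backward sweep per sample, exactly the two signal directions the chip set wires pin-to-pin.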
On the other hand, the neuron chip has a very simple structure, consisting of 20 neuron cells without additional circuits. The sigmoid circuit [3] in the neuron cell uses a differential pair, and its slope and amplitude are controlled by a voltage-controlled resistor [11]. The sigmoid-derivative circuit also uses a differential pair with a min-select circuit. The size of the neuron cell is 177.2 µm x 62.4 µm. \n\nFigure 3: Block diagram of the analog neuro-board (PC host, TMS320C51 DSP, the synapse and neuron chips of the ANN board, and the GDAB interface with a 32-channel D/A and a 48-channel A/D converter) \n\nUsing these chip sets, an analog neuro-system is constructed. Figure 3 shows a brief block diagram of the analog neuro-system, where an analog neuro-board is interfaced to a host computer through a GDAB (General Data Acquisition Board). The GDAB is specially designed for the data interface with the analog neuro-chips. The neuro-board has 6 synapse chips and 2 neuron chips in a 2-layer Perceptron architecture. For test and development purposes, a DSP, ADC, and DAC are installed on the neuro-board to refresh and adjust weights. \n\nThe forward propagation time of the 2-layer Perceptron is measured as about 30 µsec. Therefore the computation speed of the neuro-board is about 266 MCPS (Mega Connections Per Second) for recall and about 200 MCUPS (Mega Connection Updates Per Second) for error backpropagation learning.
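As a back-of-the-envelope check on these figures (my arithmetic, not a measurement from the paper), a recall rate in MCPS is just the number of connection evaluations in one forward pass divided by the pass time:

```python
def mcps(connections_per_pass, pass_time_s):
    # mega connection evaluations per second for one forward pass
    return connections_per_pass / pass_time_s / 1e6

# Inverting the reported numbers: 266 MCPS at a 30 microsecond pass time
# implies roughly 266e6 * 30e-6 = 7980 connection evaluations per pass.
implied_connections = 266e6 * 30e-6
```

The 200 MCUPS learning figure can be checked the same way; at the same connection count it would correspond to a learning pass of roughly 40 µs (again an inference, not a reported number).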
To achieve this speed with a DSP, about 400 MIPS would be required for recall and at least 600 MIPS for error backpropagation learning. \n\nFigure 4: Structure of a feedforward active noise canceling system (noise source, acoustic channels C1(z) and C2(z), and an adaptive filter or multilayer Perceptron driving the canceling loudspeaker) \n\n3 ACTIVE NOISE CANCELING USING NEURO-CHIP \n\nThe basic architecture of feedforward active noise canceling is shown in Figure 4. The area near the microphone is called the \"quiet zone,\" which means that the noise should be small in this area. Noise propagates from a source to the quiet zone through a dispersive medium, whose characteristics are modeled as a finite impulse response (FIR) filter with additional random noise. An active noise canceller should generate electric signals for a loudspeaker, which creates acoustic signals to cancel the noise at the quiet zone. In general the electric-to-acoustic transfer characteristic of the loudspeaker is nonlinear, so the overall active noise canceling (ANC) system also becomes nonlinear. Therefore a multilayer Perceptron has a potential advantage over popular transversal adaptive filters based on least-mean-square (LMS) error minimization. \n\nExperiments were conducted on car noise canceling. The reference signal for the noise source was extracted from the engine room while a compact car was running at 60 km/hour. The difference of the two acoustic channels, i.e., H(z) = C1(z)/C2(z), the additive noise n, and the nonlinear characteristics of the loudspeaker need to be compensated. Two different acoustic channels are used for the experiments. The first channel H1(z) = 0.894 + 0.447 z^-1 is a minimum-phase channel, while the second, non-minimum-phase channel H2(z) = 0.174 + 0.6 z^-1 + 0.6 z^-2 + 0.174 z^-3 characterizes frequency-selective multipath fading with a deep spectral amplitude null. A simple cubic distortion model was used for the characteristics of the loudspeaker [12]. To compare the performance of the neuro-chip with digital processors, computer simulations were first conducted with the error backpropagation algorithm for a single-hidden-layer Perceptron as well as the LMS algorithm for a transversal adaptive filter. Then the same experimental data were provided to the developed neuro-board by a personal computer through the GDAB. \n\nFigure 5: Noise Reduction Ratio (dB) versus Signal-to-Distortion Ratio (dB) for (a) a simple acoustic channel H1(z) and (b) a multi-path fading acoustic channel H2(z). Here, '+', '*', 'x', and 'o' denote results of the LMS algorithm, neural network simulation, neural network simulation with 8-bit input quantization, and neuro-chips, respectively. \n\nResults for the channels H1(z) and H2(z) are shown in Figures 5(a) and 5(b), respectively. Each point in these figures denotes the result of one experiment with different parameters. The horizontal axes represent the Signal-to-Distortion Ratio (SDR) of the speaker's nonlinear characteristics.
The vertical axes represent the Noise Reduction Ratio (NRR) of the active noise canceling systems. As expected, severe nonlinear distortion of the loudspeaker resulted in poor noise canceling for the LMS canceller, whereas the performance degradation was greatly reduced by the neural network canceller. With the neuro-chips the performance was worse than that of the computer simulation. Although the neuro-chip demonstrated active noise canceling and worked better than LMS cancellers for very small SDRs, i.e., very high nonlinear distortion, its performance saturated at -8 dB and -5 dB NRR, respectively. The performance saturation was more severe for the harder problem with the complicated H2(z) channel. \n\nThe performance degradation with neuro-chips may come from inherent limitations of analog chips such as the limited dynamic ranges of synaptic weights and signals, unwanted offsets and nonlinearity, and the limited resolution of the learning rate and sigmoid slope [9]. However, side effects of the GDAB board, i.e., the fixed resolution of its A/D and D/A converters for data I/O, also contributed to the performance degradation. The input and output resolutions of the GDAB were 16 bit and 8 bit, respectively. Unlike actual real-world systems, the input values of the experimental analog neuro-chips are these 8-bit quantized values. As shown in Figure 5, results of the computer simulation with 8-bit quantized target values showed much degraded performance compared to the floating-point simulations. Therefore, a significant portion of the poor performance of the experimental analog system may be attributed to the 8-bit converters, and the analog system may work better in real-world systems.
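The feedforward structure of Figure 4 and the quantization effect discussed above can be illustrated together in simulation. The sketch below uses a filtered-x LMS canceller (the standard software treatment of the secondary path C2; the paper's own LMS and Perceptron cancellers differ in detail), a hypothetical cubic speaker model with an assumed coefficient k, and an optional uniform quantizer on the canceller's reference input. The channel taps in the usage note match H1(z) from the text; everything else is illustrative.

```python
import numpy as np

def cubic_speaker(u, k=0.1):
    # hypothetical mild cubic distortion; the coefficient k is an assumption,
    # not a value taken from [12]
    return u - k * u ** 3

def quantize(x, bits, full_scale=1.0):
    # uniform quantizer clipping to +/- full_scale, as a stand-in for a
    # fixed-resolution data converter
    step = 2.0 * full_scale / (2 ** bits)
    return np.round(np.clip(x, -full_scale, full_scale) / step) * step

def anc_residual(noise, c1, c2, taps=8, mu=0.05, k=0.1, bits=None):
    """Feedforward ANC: the noise reaches the quiet zone via C1(z); the
    canceller drives the nonlinear speaker, whose sound passes through C2(z)."""
    ref = quantize(noise, bits) if bits is not None else noise
    n = len(noise)
    d = np.convolve(noise, c1)[:n]   # true noise arriving at the quiet zone
    fx = np.convolve(ref, c2)[:n]    # reference filtered by C2 (filtered-x)
    w = np.zeros(taps)
    xb = np.zeros(taps)              # reference tap-delay line
    fb = np.zeros(taps)              # filtered-reference tap-delay line
    yb = np.zeros(len(c2))           # recent speaker outputs
    e = np.empty(n)
    for i in range(n):
        xb = np.roll(xb, 1); xb[0] = ref[i]
        fb = np.roll(fb, 1); fb[0] = fx[i]
        y = cubic_speaker(w @ xb, k)     # distorted loudspeaker output
        yb = np.roll(yb, 1); yb[0] = y
        e[i] = d[i] - c2 @ yb            # residual at the microphone
        w += mu * e[i] * fb              # filtered-x LMS update
    return e, d

def nrr_db(e, d):
    # Noise Reduction Ratio as plotted in Figure 5 (negative means canceling)
    return 10.0 * np.log10(np.mean(e ** 2) / np.mean(d ** 2))
```

Running this with c1 = c2 = [0.894, 0.447] (the H1 taps, taking C2 equal to C1 for simplicity) gives a clearly negative NRR over the converged tail, and coarsening the reference quantization raises the residual floor, mirroring the degradation seen with the 8-bit converters.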
\n\nActual  acoustic  signals  are  plotted  in  Figure  6. The  top,  middle,  and  bottom  signals \ndenote noise , negated speaker signal, and residual noise at the quiet zone, respectively. \n\nFigure 6:  Examples of noise, negated loud-speaker canceling signal, and residual error \n\n\f670 \n\n4  CONCLUSION \n\nJ.-w.  Cho and s.-Y. Lee \n\nIn this paper we report an experimental results of active noise canceling using analogue \nneuro-chips  with  on-chip  learning  capability.  Although  the  its  performance  is  limited \ndue  to  nonideal  characteristics  of analogue  chip  itself and  also  peripheral  devices,  it \nclearly demonstrates feasibility of analogue chips for real world applications. \n\nAcknowledgements \n\nThis  research  was  supported  by  Korean  Ministry  of  Information  and  Tele(cid:173)\ncommunications. \n\nReferences \n\n[1]  T.  Watanabe,  K.  Kimura,  M.  Aold,  T.  Sakata  &  K.  Ito  (1993)  A  Single  1.5-V \nDigital  Chip  for  a  106  Synapse  Neural  Network,  IEEE  Trans.  Neural  Network, \nVolA, No.3, pp.387-393. \n\n[2J  T.  Morie  and  Y.  Amemiya  (1994)  An  All-Analog  Expandable  Neural  Network \nLSI  with  On-Chip  Backpropagation  Learning,  IEEE  Journal  of  Solid  State \nCircuits, vo1.29,  No.9, pp.1086-1093. \n\n[3J  J.-W.  Cho,  Y.  K.  Choi,  S.-Y.  Lee  (1996)  Modular  Neuro-Chip  with  On-Chip \nLearning  and  Adjustable  Learning Parameters,  Neural  Processing  Letters,  VolA, \nNo.1. \n\n[4J  J.  Alspector,  A.  Jayakumar, S. Luna (1992) Experimental evaluation of learning in \nneural  microsystem,  Advances  in  Neural  Information  Processing  Systems  4,  pp. \n871-878 . \n\n[5 J B.  Widrow,  et al.  (1975)  Adative  Noise Cancelling:  Principles  and  Applications, \n\nProceeding of IEEE,  Vo1.63, No.12, pp.1692-1716. \n\n[6]  J.  Choi,  S.H.  Bang,  BJ.  
Sheu (1993) A Programmable Analog VLSI Neural Network Processor for Communication Receivers, IEEE Transactions on Neural Networks, Vol.4, No.3, pp.484-495. \n\n[7] J.-W. Cho and S.-Y. Lee (1998) Analog neuro-chips with on-chip learning capability for adaptive nonlinear equalizers, Proc. IJCNN, pp.581-586, May 4-9, Anchorage, USA. \n\n[8] J. Van der Spiegel, C. Donham, R. Etienne-Cummings, S. Fernando (1994) Large scale analog neural computer with programmable architecture and programmable time constants for temporal pattern analysis, Proc. ICNN, pp.1830-1835. \n\n[9] Y.K. Choi, K.H. Ahn, and S.Y. Lee (1996) Effects of multiplier offsets on on-chip learning for analog neuro-chips, Neural Processing Letters, Vol.4, No.1, pp.1-8. \n\n[10] T. Enomoto, T. Ishihara and M. Yasumoto (1982) Integrated tapped MOS analogue delay line using switched-capacitor technique, Electronics Letters, Vol.18, pp.193-194. \n\n[11] P.E. Allen, D.R. Holberg (1987) CMOS Analog Circuit Design, Holt, Rinehart and Winston. \n\n[12] F. Gao and W.M. Snelgrove (1991) Adaptive linearization of a loudspeaker, Proc. International Conference on Acoustics, Speech and Signal Processing, pp.3589-3592. \n", "award": [], "sourceid": 1541, "authors": [{"given_name": "Jung-Wook", "family_name": "Cho", "institution": null}, {"given_name": "Soo-Young", "family_name": "Lee", "institution": null}]}