{"title": "A Model of the Phonological Loop: Generalization and Binding", "book": "Advances in Neural Information Processing Systems", "page_first": 83, "page_last": 90, "abstract": null, "full_text": "A  Model of the Phonological  Loop: \n\nGeneralization and  Binding \n\nRandall C.  O'Reilly \n\nDepartment of Psychology \n\nUniversity of Colorado Boulder \n\n345  UCB \n\nBoulder, CO  80309 \n\noreilly@psych.colorado.edu \n\nRodolfo  Soto \n\nDepartment of Psychology \n\nUniversity of Colorado Boulder \n\n345  UCB \n\nBoulder, CO  80309 \n\nAbstract \n\nWe present a  neural network model that shows how the prefrontal \ncortex, interacting with the basal ganglia, can maintain a sequence \nof  phonological  information  in  activation-based  working  memory \n(i.e.,  the  phonological  loop).  The  primary function  of this  phono(cid:173)\nlogical  loop  may  be  to  transiently  encode  arbitrary  bindings  of \ninformation  necessary  for  tasks  -\nthe  combinatorial  expressive \npower  of language  enables  very  flexible  binding  of essentially  ar(cid:173)\nbitrary  pieces  of information.  Our  model  takes  advantage  of the \nclosed-class nature of phonemes, which allows  different  neural rep(cid:173)\nresentations of all possible phonemes at each sequential position to \nbe encoded.  To  make this work,  we  suggest that the basal ganglia \nprovide a  region-specific update signal that allocates phonemes to \nthe appropriate sequential coding slot.  To  demonstrate that flexi(cid:173)\nble,  arbitrary binding of novel  sequences can be supported by this \nmechanism,  we  show  that  the  model  can  generalize  to  novel  se(cid:173)\nquences after  moderate amounts of training. \n\n1 \n\nIntroduction \n\nSequential  binding  is  a  version  of the  binding  problem requiring  that  the  identity \nof an item and its position  within a  sequence be bound.  
For example, to encode a phone number (e.g., 492-0054), one must remember not only the digits, but their order within the sequence. It has been suggested that the brain may have developed a specialized system for this form of binding in the domain of phonological sequences, in the form of the phonological loop (Baddeley, 1986; Baddeley, Gathercole, & Papagno, 1998; Burgess & Hitch, 1999). The phonological loop is generally conceived of as a system that can quickly encode a sequence of phonemes and then repeat it back. Standard estimates place the capacity of this loop at about 2.5 seconds of \"inner speech,\" and it is widely regarded as depending on the prefrontal cortex (e.g., Paulesu, Frith, & Frackowiak, 1993). We have developed a model of the phonological loop based on our existing framework for understanding how the prefrontal cortex and basal ganglia interact to support activation-based working memory (Frank, Loughry, & O'Reilly, 2001). This model performs binding by using different neural substrates for the different sequential positions of phonemes. This is a viable solution for a small, closed-class set of items like phonemes. However, through the combinatorial power of language, these phonological sequences can represent a huge number of distinct combinations of concepts. Therefore, this basic maintenance mechanism can be leveraged in many different circumstances to bind information needed for immediate use (e.g., in working memory tasks).\n\nA good example of this form of transient, phonologically-dependent binding comes from a task studied by Miyake and Soto (in preparation).
In this task, participants saw colored letters presented one at a time on a computer display, and had to respond to targets of a red X or a green Y, but not to any other color-letter combination (e.g., green X's and red Y's, which were also presented). After an initial series of trials with this set of targets, the targets were switched to be a green X and a red Y. Thus, the task clearly requires binding of color and letter information, and updating of these bindings after the switch condition. Miyake and Soto (in preparation) found that if they simply had participants repeat the word \"the\" over and over during the task (i.e., articulatory suppression), it interfered significantly with performance. In contrast, performing a similar repeated motor response that did not involve the phonological system (repeated foot tapping) did not interfere (but this task did interfere at the same level as articulatory suppression in a control visual search task, so one cannot argue that the interference was simply a matter of differential task difficulty). Miyake and Soto (in preparation) interpret this pattern of results as showing that the phonological loop supports the binding of stimulus features (e.g., participants repeatedly say to themselves \"red X, green Y...\", which is supported by debriefing reports), and that the use of this phonological system for unrelated information during articulatory suppression leads to the observed performance deficits.\n\nThis form of phonological binding can be contrasted with other forms of binding that can be used in other situations and subserved by other brain areas besides the prefrontal cortex.
O'Reilly, Busby, and Soto (in press) identify two other important binding mechanisms and their neural substrates in addition to the phonological loop mechanism:\n\n\u2022 Cortical coarse-coded conjunctive binding: This is where each neural unit codes in a graded fashion for a large number of relatively low-order conjunctions, and many such units are used to represent any given input (e.g., Wickelgren, 1969; Mel & Fiser, 2000; O'Reilly & Busby, 2002). This form of binding takes place within the basic representations in the network that are shaped by gradual learning processes, and provides a long-lasting (non-transient) form of binding. In short, these kinds of distributed representations avoid the binding problem in the first place by ensuring that relevant conjunctions are encoded, instead of representing different features using entirely separate, localist units (which is what gives rise to binding problems in the first place). However, this form of binding cannot rapidly encode novel bindings required for specific tasks; the phonological loop mechanism can thus complement the basic cortical mechanism by providing flexible, transient bindings on an ad-hoc basis.\n\n\u2022 Hippocampal episodic conjunctive binding: Many theories of hippocampal function converge on the idea that it binds together individual elements of an experience into a unitary representation, which can for example be later recalled from partial cues (see O'Reilly & Rudy, 2001 for a review). These hippocampal conjunctive representations are higher-order and more specific than the lower-order coarse-coded cortical conjunctive representations (i.e., a hippocampal conjunction encodes the combination of many feature elements, while a cortical conjunction encodes relatively few).
Thus, the hippocampus can be seen as a specialized system for doing long-term binding of specific episodes, complementing the more generalized conjunctive binding performed by the cortex. Importantly, the hippocampus can also encode these conjunctions rapidly, and therefore it shares some of the same functionality as the phonological loop mechanism (i.e., rapidly encoding arbitrary conjunctions required for tasks). Thus, it is likely that the hippocampus and the prefrontal-mediated working memory system (including the phonological loop) are partially redundant with each other, and work together in many tasks (Cohen & O'Reilly, 1996).\n\n2 Prefrontal Cortex and Basal Ganglia in Working Memory\n\nOur model of the phonological loop takes advantage of recent work showing how the prefrontal cortex and basal ganglia can interact to support activation-based working memory (Frank et al., 2001). The critical principles behind this work are as follows:\n\n\u2022 Prefrontal cortex (PFC) is specialized relative to the posterior cortex for robust and rapidly updatable maintenance of information in an active state (i.e., via persistent firing of neurons). Thus, PFC can quickly update to maintain new information (in this case, the one exposure to a sequence of phonemes), while also being able to protect maintained information from interference from ongoing processing (see O'Reilly, Braver, & Cohen, 1999; Cohen, Braver, & O'Reilly, 1996; Miller & Cohen, 2001 for elaborations and reviews of relevant data).\n\n\u2022 Robust maintenance and rapid updating are in fundamental conflict, and require a dynamic gating mechanism that can switch between these two modes of operation (O'Reilly et al., 1999; Cohen et al., 1996).
\n\n\u2022 The basal ganglia (BG) can provide this dynamic gating mechanism via modulatory, disinhibitory connectivity with the PFC. Furthermore, this BG-based gating mechanism provides selectivity, such that separate regions of the PFC can be independently updated or allowed to perform robust maintenance. A possible anatomical substrate for these separably updatable PFC regions is the stripe structures identified by Levitt, Lewis, Yoshioka, and Lund (1993).\n\n\u2022 Active maintenance in the PFC is implemented via a combination of recurrent excitatory connections and intracellular excitatory ionic conductances. This allows the PFC units to generally reflect the current inputs, except when these units have their intracellular maintenance currents activated, which causes them to reflect previously maintained information. See Frank et al. (2001) for more details on the importance of this mechanism.\n\n3 Phonological Loop Model\n\nThe above mechanisms motivated our modeling of the phonological loop as follows (see Figure 1). First, separate PFC stripes are used to encode each step in the sequence. Thus, binding of phoneme identity and sequential order occurs in this model by using distinct neural substrates to represent the sequential information. This is entirely feasible because each stripe can represent all of the possible phonemes, given that they represent a closed class of items. Second, the storage of a new sequence involves the basal ganglia gating mechanism triggering updates of the different PFC stripes in the appropriate order. We assume this can be learned over experience, and we are currently working on developing powerful learning mechanisms for adapting the basal ganglia gating mechanism in this way. This kind of gating control would also likely require some kind of temporal/sequential input that indicates the location within the sequence; such information might come from the cerebellum (e.g., Ivry, 1996).\n\nFigure 1: Phonological loop model. Ten different input symbols are possible at each time step (one unit out of ten activated in the Input layer). A sequence is encoded in one pass by presenting the Input together with the sequential location in the Time input layer for each step in the sequence. The simulated basal ganglia gating mechanism (implemented by fiat in script code) uses the time input to trigger intracellular maintenance currents in the corresponding stripe region of the context (PFC) layer (stripes are shown as the three separate groups of units within the Context layer; individual context units also had an excitatory self-connection for maintenance). Thus, the first stripe must learn to encode the first input, etc. Immediately after encoding, the network is then trained to produce the correct output in response to the time input, without any Input activation (the activation state shown is the network correctly recalling the third item in a sequence). The hidden layer must therefore learn to decode the context representations for this recall phase. Generalization testing involved presenting untrained sequences.\n\nIn advance of having developed realistic and computationally powerful mechanisms for both the learning and the temporal/sequential control aspects of the model, we simply implemented these by fiat in the simulator. For the temporal signal indicating location within the sequence, we simply activated a different individual time unit for each point in the sequence (the Time input layer in Figure 1).
This signal was then used by a simulated gating mechanism (implemented in script code in the simulator) to update the corresponding stripe in prefrontal cortex. Although the resulting model was therefore simplified, it nevertheless still had a challenging learning task to perform. Specifically, the stripe context layers had to learn to encode and maintain the current input value properly, and the Hidden layer had to be able to decode the context layer information as a function of the time input value. The model was implemented using the Leabra algorithm with standard parameters (O'Reilly, 1998; O'Reilly & Munakata, 2000).\n\nFigure 2: Generalization results for the phonological loop model as a function of the number of training patterns. Generalization is over 90% correct with training on less than 20% of the possible input patterns. N = 5.\n\n3.1 Network Training\n\nThe network was trained as follows. Sequences (of length 3 for our initial work) were presented by sequentially activating an input \"phoneme\" and a corresponding sequential location input (in the Time input layer). We only used 10 different phonemes, each of which was encoded locally with a different unit in the Input layer. For example, the network could get Time = 0, Input = 2, then Time = 1, Input = 7, then Time = 2, Input = 3 to encode the sequence 2,7,3.
During this encoding phase, the network was trained to activate the current Input on the Output layer, and the simulated gating function simply activated the intracellular maintenance currents for the units in the stripe in the Context (PFC) layer that corresponded to the Time input (i.e., stripe 0 for Time=0, etc.). Then, the network was trained to recall this sequence, during which time no Input activation was present. The network received the sequence of Time inputs (0,1,2), and was trained to produce the corresponding Output for that location in the sequence (e.g., 2,7,3). The PFC context layers just maintained their activation states based on the intracellular ion currents activated during encoding (and recurrent activation); once the network has been trained, the active PFC state represents the entire sequence.\n\n3.2 Generalization Results\n\nA critical test of the model is to determine whether it can perform systematically with novel sequences; only if it demonstrates this capacity can it serve as a mechanism for rapidly binding arbitrary information (such as the task demands studied by Miyake & Soto, in preparation). With 10 input phonemes and sequences of length three, there were 1,000 different sequences possible (we allowed phonemes to repeat). We trained on 100, 200, 300, and 800 of these sequences, and tested generalization on the remaining sequences. The generalization results are shown in Figure 2, which clearly shows that the network learned these sequences in a systematic manner and could transfer its training knowledge to novel sequences.
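The slot-based encode/recall scheme just described can be sketched in a few lines of standalone Python. This is an idealized abstraction for illustration only, not the actual Leabra network: the perfect gating function here stands in for what the trained model must learn, and the phonemes are just the integer codes 0-9 used above.

```python
import random

N_PHONEMES, SEQ_LEN = 10, 3  # 10 locally-coded phonemes, length-3 sequences

def encode(seq):
    # Gating: the Time input at step t opens only stripe t, which then
    # maintains that phoneme; identity and position are bound by location.
    slots = [None] * SEQ_LEN
    for t, phoneme in enumerate(seq):
        slots[t] = phoneme
    return slots

def recall(slots):
    # Recall: step through the Time inputs (0, 1, 2) and read out each stripe.
    return [slots[t] for t in range(SEQ_LEN)]

# Any sequence, trained or novel, is handled identically by the slot scheme.
assert recall(encode([2, 7, 3])) == [2, 7, 3]

# Training-set statistics: with 100 random training sequences, each of the
# 10 phonemes lands in each slot about 100/10 = 10 times on average.
random.seed(0)
train = [[random.randrange(N_PHONEMES) for _ in range(SEQ_LEN)]
         for _ in range(100)]
counts = [[0] * N_PHONEMES for _ in range(SEQ_LEN)]
for seq in train:
    for t, p in enumerate(seq):
        counts[t][p] += 1
print(sum(counts[0]) / N_PHONEMES)  # average exposures per (slot, phoneme): 10.0
```

Because the slots are defined independently of phoneme identity, generalization to untrained sequences requires only that the network learn each slot's encode/decode mapping, not each whole sequence.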
\nInterestingly, there appears to be a critical transition between 100 and 200 training sequences: 100 sequences corresponds to each item within each slot being presented roughly 10 times, which appears to provide sufficient statistical information regarding the independence of individual slots.\n\nFigure 3: Hidden unit representations (values are weights into a hidden unit from all other layers). Unit in (a) encodes the conjunction of a subset of input/output items at time 2. (b) encodes a different subset of items at time 2. (c) encodes items over times 2 and 3. (d) has no selectivity in the input, but does project to the output and likely participates in recall of items at time step 3.\n\n3.3 Analysis of Representations\n\nTo understand how the hidden units encode and retrieve information in the maintained context layer in a systematic fashion that supports the good generalization observed, we examined the patterns of learned weights. Some representative examples are shown in Figure 3. Here, we see evidence of coarse-coded representations that encode a subset of items in either one time point in the sequence or a couple of time points. We also found units that were more clearly associated with retrieval and not encoding. These types of representations are consistent with our other work showing how these kinds of representations can support good generalization (O'Reilly & Busby, 2002).\n\n4 Discussion\n\nWe have presented a model of sequential encoding of phonemes, based on independently-motivated computational and biological considerations, focused on the neural substrates of the prefrontal cortex and basal ganglia (Frank et al., 2001).
\nViewed in more abstract, functional terms, however, our model is just another in a long line of computational models of how people might encode sequential order information. There are two classic models: (a) associative chaining, where the activation of a given item triggers the activation of the next item via associative links, and (b) item-position association models, where items are associated with their sequential positions and recalled from position cues (e.g., Lee & Estes, 1977). The basic associative chaining model has been decisively ruled out based on error patterns (Henson, Norris, Page, & Baddeley, 1996), but modified versions of it may avoid these problems (e.g., Lewandowsky & Murdock, 1989). Probably the most accomplished current model, Burgess and Hitch (1999), is a version of the item-position association model with a competitive queuing mechanism, where the most active item is output first and is then suppressed to allow other items to be output.\n\nCompared to these existing models, our model is unique in not requiring fast associational links to encode items within the sequence. For example, the Burgess and Hitch (1999) model uses rapid weight changes to associate items with a context representation that functions much like the time input in our model. In contrast, items are maintained strictly via persistent activation in our model, and the basal-ganglia based gating mechanism provides a means of encoding items into separate neural slots that implicitly represent sequential order. Thus, the time inputs act independently on the basal ganglia, which then operates generically on whatever phoneme information is presently activated in the auditory input, obviating the need for specific item-context links.
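The contrast with basic associative chaining can be made concrete: a pure chaining scheme stores item-to-successor links, which become ambiguous as soon as an item repeats within a sequence, whereas a slot scheme is unaffected. The following is a hypothetical toy sketch, not any published model; it gives one intuition for why error patterns with repeated items have been used to argue against simple chaining.

```python
def chain_encode(seq):
    # Associative chaining: remember, for each item, what followed it.
    links = {}
    for a, b in zip(seq, seq[1:]):
        links.setdefault(a, []).append(b)
    return links

def slot_encode(seq):
    # Slot-based binding: position is carried by which slot holds the item.
    return {t: p for t, p in enumerate(seq)}

seq = [2, 7, 2, 3]                  # item 2 repeats within the sequence

links = chain_encode(seq)
assert links[2] == [7, 3]           # two competing successors for item 2:
                                    # chaining alone cannot order the recall

slots = slot_encode(seq)
assert [slots[t] for t in range(len(seq))] == seq  # slots recall unambiguously
```

In the model above, the slots are the PFC stripes and the position cue is the Time input, so repeated phonemes pose no special problem for recall.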
\nThe clear benefit of not requiring associational links is that it makes the model much more flexible and capable of generalization to novel sequences, as we have demonstrated here (see O'Reilly & Munakata, 2000 for extended discussion of this general issue). Thus, we believe our model is uniquely well suited for explaining the role of the phonological loop in rapid binding of novel task information. Nevertheless, the present implementation of the model has numerous shortcomings and simplifications, and does not begin to approach the work of Burgess and Hitch (1999) in accounting for relevant psychological data. Thus, future work will be focused on remedying these limitations. One important issue that we plan to address is the interplay between the present model based on the prefrontal cortex and the binding that the hippocampus can provide; we suspect that the hippocampus will contribute item-position associations and their associated error patterns and other phenomena as discussed in Burgess and Hitch (1999).\n\nAcknowledgments\n\nThis work was supported by ONR grant N00014-00-1-0246 and NSF grant IBN-9873492. Rodolfo Soto died tragically at a relatively young age during the preparation of this manuscript; this work is dedicated to his memory.\n\n5 References\n\nBaddeley, A., Gathercole, S., & Papagno, C. (1998). The phonological loop as a language learning device. Psychological Review, 105, 158.\n\nBaddeley, A. D. (1986). Working memory. New York: Oxford University Press.\n\nBurgess, N., & Hitch, G. J. (1999). Memory for serial order: A network model of the phonological loop and its timing. Psychological Review, 106, 551-581.\n\nCohen, J. D., Braver, T. S., & O'Reilly, R. C. (1996).
A computational approach to prefrontal cortex, cognitive control, and schizophrenia: Recent developments and current challenges. Philosophical Transactions of the Royal Society (London) B, 351, 1515-1527.\n\nCohen, J. D., & O'Reilly, R. C. (1996). A preliminary theory of the interactions between prefrontal cortex and hippocampus that contribute to planning and prospective memory. In M. Brandimonte, G. O. Einstein, & M. A. McDaniel (Eds.), Prospective memory: Theory and applications (pp. 267-296). Mahwah, New Jersey: Erlbaum.\n\nFrank, M. J., Loughry, B., & O'Reilly, R. C. (2001). Interactions between the frontal cortex and basal ganglia in working memory: A computational model. Cognitive, Affective, and Behavioral Neuroscience, 1, 137-160.\n\nHenson, R. N. A., Norris, D. G., Page, M. P. A., & Baddeley, A. D. (1996). Unchained memory: Error patterns rule out chaining models of immediate serial recall. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 49(A), 80-115.\n\nIvry, R. (1996). The representation of temporal information in perception and motor control. Current Opinion in Neurobiology, 6, 851-857.\n\nLee, C. L., & Estes, W. K. (1977). Order and position in primary memory for letter strings. Journal of Verbal Learning and Verbal Behavior, 16, 395-418.\n\nLevitt, J. B., Lewis, D. A., Yoshioka, T., & Lund, J. S. (1993). Topography of pyramidal neuron intrinsic connections in macaque monkey prefrontal cortex (areas 9 & 46). Journal of Comparative Neurology, 338, 360-376.\n\nLewandowsky, S., & Murdock, B. B. (1989). Memory for serial order. Psychological Review, 96, 25-57.\n\nMel, B. W., & Fiser, J. (2000). Minimizing binding errors using learned conjunctive features.
Neural Computation, 12, 731-762.\n\nMiller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24, 167-202.\n\nMiyake, A., & Soto, R. (in preparation). The role of the phonological loop in executive control.\n\nO'Reilly, R. C. (1998). Six principles for biologically-based computational models of cortical cognition. Trends in Cognitive Sciences, 2(11), 455-462.\n\nO'Reilly, R. C., Braver, T. S., & Cohen, J. D. (1999). A biologically based computational model of working memory. In A. Miyake & P. Shah (Eds.), Models of working memory: Mechanisms of active maintenance and executive control (pp. 375-411). New York: Cambridge University Press.\n\nO'Reilly, R. C., & Busby, R. S. (2002). Generalizable relational binding from coarse-coded distributed representations. Advances in Neural Information Processing Systems (NIPS), 2001.\n\nO'Reilly, R. C., Busby, R. S., & Soto, R. (in press). Three forms of binding and their neural substrates: Alternatives to temporal synchrony. In A. Cleeremans (Ed.), The unity of consciousness: Binding, integration, and dissociation. Oxford: Oxford University Press.\n\nO'Reilly, R. C., & Munakata, Y. (2000). Computational explorations in cognitive neuroscience: Understanding the mind by simulating the brain. Cambridge, MA: MIT Press.\n\nO'Reilly, R. C., & Rudy, J. W. (2001). Conjunctive representations in learning and memory: Principles of cortical and hippocampal function. Psychological Review, 108, 311-345.\n\nPaulesu, E., Frith, C. D., & Frackowiak, R. S. J. (1993). The neural correlates of the verbal component of working memory. Nature, 362, 342-345.\n\nWickelgren, W. A. (1969).
Context-sensitive coding, associative memory, and serial order in (speech) behavior. Psychological Review, 76, 1-15.", "award": [], "sourceid": 2025, "authors": [{"given_name": "Randall", "family_name": "O'Reilly", "institution": null}, {"given_name": "R.", "family_name": "Soto", "institution": null}]}