{"title": "Modelling Spatial Recall, Mental Imagery and Neglect", "book": "Advances in Neural Information Processing Systems", "page_first": 96, "page_last": 102, "abstract": null, "full_text": "Modelling spatial recall, mental imagery and \n\nneglect \n\nSuzanna  Becker \n\nDepartment of Psychology \n\nMcMaster University \n1280 Main Street West \n\nHamilton,Ont.  Canada L8S 4Kl \n\nbecker@mcmaster.ca \n\nNeil Burgess \n\nDepartment of Anatomy and \n\nInstitute of Cognitive Neuroscience, UCL \n\n17 Queen Square \n\nLondon, UK WCIN 3AR \n\nn.burgess@ucl.ac.uk \n\nAbstract \n\nWe present a computational model of the neural mechanisms in the pari(cid:173)\netal and  temporal lobes that support spatial navigation, recall of scenes \nand  imagery  of the  products  of recall.  Long  term  representations  are \nstored  in  the  hippocampus,  and  are  associated  with  local  spatial  and \nobject-related features  in the parahippocampal region.  Viewer-centered \nrepresentations are dynamically generated from long term memory in the \nparietal part of the  model.  The model thereby  simulates recall  and  im(cid:173)\nagery  of locations and  objects in  complex environments.  After parietal \ndamage,  the  model exhibits hemispatial neglect in  mental imagery that \nrotates  with  the  imagined perspective of the  observer,  as  in  the  famous \nMilan  Square experiment  [1].  Our model  makes  novel  predictions for \nthe  neural representations  in  the  parahippocampal and  parietal  regions \nand for behavior in healthy volunteers and neuropsychological patients. \n\n1  Introduction \n\nWe  perform spatial computations everday.  Tasks  such as  reaching and navigating around \nvisible  obstacles  are  predominantly  sensory-driven rather  than  memory-based,  and  pre(cid:173)\nsumably rely  upon  egocentric,  or viewer-centered representations of space.  
These representations, and the ability to translate between them, have been accounted for in several computational models of the parietal cortex, e.g. [2, 3]. In other situations, such as route planning, recall and imagery for scenes or events, one must also rely upon representations of spatial layouts from long-term memory. Neuropsychological and neuroimaging studies implicate both the parietal and hippocampal regions in such tasks [4, 5], with the long-term memory component associated with the hippocampus. The discovery of \"place cells\" in the hippocampus [6] provides evidence that hippocampal representations are allocentric, in that absolute locations in open spaces are encoded irrespective of viewing direction. \n\nThis paper addresses the nature and source of the spatial representations in the hippocampal and parietal regions, and how they interact during recall and navigation. We assume that in the hippocampus proper, long-term spatial memories are stored allocentrically, whereas in the parietal cortex view-based images are created on-the-fly during perception or recall. Intuitively it makes sense to use an allocentric representation for long-term storage, as the position of the body will have changed before recall. Conversely, to act on a spatial location (e.g. reach with the hand) or to imagine a scene, an egocentric representation (e.g. relative to the hand or retina) is more useful [7, 8]. \n\nA study of hemispatial neglect patients throws some light on the interaction of long-term memory with mental imagery. Bisiach and Luzzatti [1] asked two patients to recall the buildings from the familiar Cathedral Square in Milan, after being asked to imagine (i) facing the cathedral, and (ii) facing in the opposite direction. 
Both patients, in both (i) and (ii), predominantly recalled buildings that would have appeared on their right from the specified viewpoint. Since the buildings recalled in (i) were located physically on the opposite side of the square to those recalled in (ii), the patients' long-term memory for all of the buildings in the square was apparently intact. Further, the area neglected rotated according to the patient's imagined viewpoint, suggesting that their impairment relates to the generation of egocentric mental images from a non-egocentric long-term store. \n\nThe model also addresses how information about object identity is bound to locations in space in long-term memory, i.e. how the \"what\" and the \"where\" pathways interact. Object information from the ventral visual processing stream enters the hippocampal formation (lateral entorhinal cortex) via the perirhinal cortex, while visuospatial information from the dorsal pathways enters medial entorhinal cortex primarily via the parahippocampal cortex [9]. We extend the O'Keefe & Burgess [10] hippocampal model to include object-place associations by encoding object features in perirhinal cortex (we refer to these features as texture, but they could also be attributes such as colour, shape or size). Reciprocal connections to the parahippocampus allow object features to cue the hippocampus to activate a remembered location in an environment, and conversely, a remembered location can be used to reactivate the feature information of objects at that location. The connections from parietal to parahippocampal areas allow the remembered location to be specified in egocentric imagery. \n\n[Figure 1 appears here, comprising: posterior parietal ego-allo translation units; medial parietal egocentric locations; parahippocampal allocentric object locations; allocentric direction inputs; perirhinal object textures; and the hippocampal formation's auto-associative place representation.] \n\nFigure 1: The model architecture. Note the allocentric encoding of direction (NSEW) in parahippocampus, and the egocentric encoding of directions (LR) in medial parietal cortex. \n\n2 The model \n\nThe model may be thought of in simple terms as follows. An allocentric representation of object location is extracted from the ventral visual stream in the parahippocampus, and feeds into the hippocampus. The dorsal visual stream provides an egocentric representation of object location in medial parietal areas and makes bi-directional contact with the parahippocampus via posterior parietal area 7a. Inputs carrying allocentric heading direction information [11] project to both parietal and parahippocampal regions, allowing bidirectional translation from allocentric to egocentric directions. Recurrent connections in the hippocampus allow recall from long-term memory via the parahippocampus, and egocentric imagery in the medial parietal areas. We now describe the model in more detail. \n\n2.1 Hippocampal system \n\nThe architecture of the model is shown in Figure 1. The hippocampal formation (HF) consists of several regions - the entorhinal cortex, dentate gyrus, CA3, and CA1 - each of which appears to code for space with varying degrees of sparseness. 
To simplify, in our model the HF is represented by a single layer of \"place cells\", each tuned to random, fixed configurations of spatial features as in [10, 12]. Additionally, it learns to represent objects' textural features associated with a particular location in the environment. It receives these inputs from the parahippocampal cortex (PH) and perirhinal cortex (PR), respectively. \n\nThe parahippocampal representation of object locations is simulated as a layer of neurons, each of which is tuned to respond whenever there is a landmark at a given distance and allocentric direction from the subject. Projections from this representation into the hippocampus drive the firing of place cells. This representation has been shown to account for the properties of place cells recorded across environments of varying shape and size [10, 12]. Recurrent connections between place cells allow subsequent pattern completion in the place cell layer. Return projections from the place cells to the parahippocampus allow reactivation of all landmark location information consistent with the current location. \n\nThe perirhinal representation in our model consists of a layer of neurons, each tuned to a particular textural feature. This region is reciprocally connected with the hippocampal formation [13]. Thus, in our model, object features can be used to cue the hippocampal system to activate a remembered location in an environment, and conversely, a remembered location can activate all associated object textures. Further, each allocentric spatial feature unit in the parahippocampus projects to the perirhinal object feature units so that attention to one location can activate a particular object's features. \n\n2.2 Parietal cortex \n\nNeurons responding to specific egocentric stimulus locations (e.g. 
relative to the eye, head or hand) have been recorded in several parietal areas. Tasks involving imagery of the products of retrieval tend to activate medial parietal areas (precuneus, posterior cingulate, retrosplenial cortex) in neuroimaging studies [14]. We hypothesize that there is a medial parietal egocentric map of space, coding for the locations of objects organised by distance and angle from the body midline. In this representation, cells are tuned to respond to the presence of an object at a specific distance in a specific egocentric direction. Cells have also been reported in posterior parietal areas with egocentrically tuned responses that are modulated by variables such as eye position [15] or body orientation (in area 7a [16]). Such coding can allow translation of locations between reference frames [17, 2]. We hypothesize that area 7a performs the translation between allocentric and egocentric representations so that, as well as being driven directly by perception, the medial parietal egocentric map can be driven by recalled allocentric parahippocampal representations. We consider only the translation between allocentric and view-dependent representations, requiring a modulatory input from the head direction system. A more detailed model would include translations between allocentric and body-, head- and eye-centered representations, and possibly the use of retrosplenial areas to buffer these intermediate representations [18]. \n\nThe translation between parahippocampal and parietal representations occurs via a hard-wired mapping of each to an expanded set of egocentric representations, each modulated by head direction so that one is fully activated for each (coarse coded) head direction (see Figure 1). 
With activation from the appropriate head direction unit, activation from the parahippocampal or parietal representation can activate the appropriate cell in the other representation via this expanded representation. \n\n2.3 Simulation details \n\nThe hippocampal component of the model was trained on the spatial environment shown in the top-left panel of Figure 2, representing the buildings of the Milan square. We generated a series of views of the square, as would be seen from the locations in the central filled rectangular region of this figure panel. The weights were determined as follows, in order to form a continuous attractor (after [19, 20]). From each training location, each visible edge point contributed the following to the activation of each parahippocampal (PH) cell: \n\n\\sum_j \\frac{1}{\\sqrt{2\\pi\\sigma_{ang}^2}} e^{-\\frac{(\\theta_i - \\theta_j)^2}{2\\sigma_{ang}^2}} \\times \\frac{1}{\\sqrt{2\\pi\\sigma_{dir}(r_j)^2}} e^{-\\frac{(r_i - r_j)^2}{2\\sigma_{dir}(r_j)^2}} \\qquad (1) \n\nwhere \\theta_i and r_i are the preferred object direction and distance of the ith PH cell, \\theta_j and r_j represent the location of the jth edge point relative to the observer, and \\sigma_{ang} and \\sigma_{dir}(r) are the corresponding standard deviations (as in [10]). Here, we used \\sigma_{ang} = \\pi/48 and \\sigma_{dir}(r) = 2(r/10)^2. The HF place cells were preassigned to cover a grid of locations in the environment, with each cell's activation falling off as a Gaussian of the distance to its preferred location. The PH-HF and HF-PH connection strengths were set equal to the correlations between activations in the parahippocampal and hippocampal regions across all training locations, and similarly, the HF-HF weights were set to values proportional to a Gaussian of the distance between their preferred locations.
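As an illustrative sketch (not the authors' code), the PH tuning of equation 1 can be written directly in NumPy; the edge-point format and function names here are our own, only the two parameter values are taken from the text:

```python
import numpy as np

SIGMA_ANG = np.pi / 48  # angular tuning width, from the text


def sigma_dir(r):
    """Radial tuning width grows with distance: sigma_dir(r) = 2(r/10)^2."""
    return 2.0 * (r / 10.0) ** 2


def ph_activation(theta_i, r_i, edges):
    """Activation of one PH cell with preferred direction theta_i and
    distance r_i, summed over visible edge points (theta_j, r_j) as in eq. 1:
    a product of an angular and a radial Gaussian per edge point."""
    act = 0.0
    for theta_j, r_j in edges:
        s = sigma_dir(r_j)
        ang = (np.exp(-(theta_i - theta_j) ** 2 / (2 * SIGMA_ANG ** 2))
               / np.sqrt(2 * np.pi * SIGMA_ANG ** 2))
        rad = (np.exp(-(r_i - r_j) ** 2 / (2 * s ** 2))
               / np.sqrt(2 * np.pi * s ** 2))
        act += ang * rad
    return act
```

A PH cell whose preferred direction and distance match a visible edge point responds far more strongly than one tuned elsewhere, which is what lets projections from this layer drive location-specific place cell firing.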
\n\nThe weights to the perirhinal (PR) object feature units - on the HF-to-PR and PH-to-PR connections - were trained by simulating sequential attention to each visible object, from each training location. Thus, a single object's textural features in the PR layer were associated with the corresponding PH location features and HF place cell activations via Hebbian learning. The PR-to-HF weights were trained to associate each training location with the single predominant texture - either that of a nearby object or that of the background. \n\nThe connections to and within the parietal component of the model were hard-wired to implement the bidirectional allocentric-egocentric mappings (these are functionally equivalent to a rotation by adding or subtracting the heading angle). The 2-layer parietal circuit in Figure 1 essentially encodes separate transformation matrices for each of a discrete set of head directions in the first layer. A right parietal lesion causing left neglect was simulated with graded, random knockout of units in the egocentric map of the left side of space. This could equally have been done to the translation units projecting to them (i.e. those in the top rows of the PP in Figure 1). \n\nAfter pretraining the model, we performed two sets of simulations. In simulation 1, the model was required to recall the allocentric representation of the Milan square after being cued with the texture and direction (\\theta_j) of each of the visible buildings in turn, at a short distance r_j. The initial input to the HF, I^{HF}(t = 0), was the sum of an externally provided texture cue from the PR cell layer, and a distance and direction cue from the PH cell layer obtained by initializing the PH states using equation 1, with r_j = 2. 
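The hard-wired allocentric-egocentric mapping described above is functionally a rotation by the heading angle. A minimal sketch over a discretized direction-by-distance map (the bin count and array layout are our assumptions, not the paper's implementation):

```python
import numpy as np

N_DIRS = 4  # coarse-coded direction bins: allocentric N, E, S, W


def allo_to_ego(allo_map, heading_bin):
    """Rotate an (direction x distance) allocentric map into egocentric
    coordinates by subtracting the heading angle, here a row shift.
    Row 0 of the result is 'ahead', proceeding clockwise."""
    return np.roll(allo_map, -heading_bin, axis=0)


def ego_to_allo(ego_map, heading_bin):
    """Inverse mapping: add the heading angle back."""
    return np.roll(ego_map, heading_bin, axis=0)
```

For example, when facing south (heading bin 2), an object to the allocentric north (bin 0) lands in egocentric bin 2, i.e. behind the observer, matching the 180-degree rotation of the PH map described for Figure 2.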
A place was then recalled by repeatedly updating the HF cells' states until convergence according to: \n\nI^{HF}(t) = 0.25\\, I^{HF}(t-1) + 0.75\\, (W^{HF-HF} A^{HF}(t-1) + I^{HF}(0)) \\qquad (2) \nA_k^{HF}(t) = \\exp(I_k^{HF}(t)) / \\sum_k \\exp(I_k^{HF}(t)) \\qquad (3) \nI^{PH}(t) = 0.9\\, I^{PH}(t-1) + 0.1\\, W^{HF-PH} A^{HF}(t) \\qquad (4) \n\nFinally, the HF place cell activity was used to perform pattern completion in the PH layer (using the W^{HF-PH} weights), to recall the other visible building locations. In simulation 2 the model was then required to generate view-based mental images of the Milan square from various viewpoints according to a specified heading direction. First, the PH cells and HF place cells were initialized to the states of the retrieved spatial location (obtained after settling in simulation 1). The model was then asked what it \"saw\" in various directions by simulating focused attention on the egocentric map, and requiring the model to retrieve the object texture at that location via activation of the PR region. The egocentric medial parietal (MP) activation was calculated from the PH-to-MP mapping, as described above. Attention to a queried egocentric direction was simulated by modulating the pattern of activation across the MP layer with a Gaussian filter centered on that location. This activation was then mapped back to the PH layer, and in turn projected to the PR layer via the PH-to-PR connections: \n\nI^{PR} = W^{HF-PR} A^{HF} + W^{PH-PR} A^{PH} \\qquad (5) \nA_k^{PR} = \\exp(I_k^{PR}) / \\sum_k \\exp(I_k^{PR}) \\qquad (6) \n\n2.4 Results and discussion \n\nIn simulation 1, when cued with the textures of each of the 5 buildings around the training region, the model settled on an appropriate place cell activation. One such example is shown in Figure 2, upper panel. The model was cued with the texture of the cathedral front, and settled to a place representation near to its southwest corner. 
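The settling dynamics of equations 2-3 amount to a leaky update toward the recurrent drive plus the fixed cue, with a softmax nonlinearity. A toy sketch (layer size, weights and cue are invented for illustration; only the update rule follows the text):

```python
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()


def settle(W_hf, I0, n_steps=50):
    """Iterate I(t) = .25 I(t-1) + .75 (W A(t-1) + I(0)), A(t) = softmax(I(t))
    for a fixed number of steps, standing in for 'until convergence'."""
    I = I0.copy()
    A = softmax(I)
    for _ in range(n_steps):
        I = 0.25 * I + 0.75 * (W_hf @ A + I0)
        A = softmax(I)
    return A


# Toy example: 5 place cells, self-excitation standing in for the
# Gaussian-of-distance recurrent weights, and a partial cue for cell 2.
W = np.eye(5) * 0.5
cue = np.array([0.1, 0.1, 1.0, 0.1, 0.1])
A = settle(W, cue)
```

The softmax keeps the place cell activities normalized to sum to one, so settling sharpens the activity bump around the cued location rather than letting it grow without bound.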
The resulting PH layer activations show correct recall of the locations of the other landmarks around the square. In simulation 2, shown in the lower panel, the model rotated the PH map according to the cued heading direction, and was able to retrieve correctly the texture of each building when queried with its egocentric direction. In the lesioned model, buildings to the egocentric left were usually not identified correctly. One such example is shown in Figure 2. The heading direction is to the south, so building 6 is represented at the top (egocentric forward) of the map. The building to the left has texture 5, and the building to the right has texture 7. After a simulated parietal lesion, the model neglects building 5. \n\n3 Predictions and future directions \n\nWe have demonstrated how egocentric spatial representations may be formed from allocentric ones and vice versa. How might these representations and the mapping between them be learned? The entorhinal cortex (EC) is the major cortical input zone to the hippocampus, and both the parahippocampal and perirhinal regions project to it [13]. Single cell recordings in EC indicate tuning curves that are broadly similar to those of place cells, but are much more coarsely tuned and less specific to individual episodes [21, 9]. Additionally, EC cells can hold state information, such as a spatial location or object identity, over long time delays and even across intervening items [9]. An allocentric representation could emerge if the EC is under pressure to use a more compressed, temporally stable code to reconstruct the rapidly changing visuospatial input. An egocentric map is altered dramatically after changes in viewpoint, whereas an allocentric map is not. 
Thus, the PH and hippocampal representations could evolve via an unsupervised learning procedure that discovers a temporally stable, generative model of the parietal input. The inverse mapping from allocentric PH features to egocentric parietal features could be learned by training the back-projections similarly. But how could the egocentric map in the parietal region be learned in the first place? In a manner analogous to that suggested by Abbott [22], a \"hidden layer\" trained by Hebbian learning could develop egocentric features in learning a mapping from a sensory layer representing retinally located targets and arbitrary heading directions to a motor layer representing randomly explored (whole-body) movement directions. \n\nWe note that our parietal imagery system might also support the short-term visuospatial working memory required in more perceptual tasks (e.g. line cancellation) [2]. Thus lesions here would produce the commonly observed pattern of combined perceptual and representational neglect. However, the difference in the routes by which perceptual and reconstructed information would enter this system, and possibly in how they are manipulated, allows for patients showing only one form of neglect [23]. \n\nSo far our simulations have involved a single spatial environment. Place cells recorded from the same rat placed in two similar novel environments show highly similar firing fields [10, 24], whereas after further exposure, distinctive responses emerge (e.g., [25, 26, 24] and unpublished data). In our model, sparse random connections from the object layer to the place layer ensure a high degree of initial place-tuning that should generalize across similar environments. 
Plasticity in the HF-PR connections will allow unique textures of walls, buildings etc. to be associated with particular places; thus after extensive exposure, environment-specific place firing patterns should emerge. \n\nA selective lesion to the parahippocampus should abolish the ability to make allocentric object-place associations altogether, thereby severely disrupting both landmark-based and memory-based navigation. In contrast, a pure hippocampal lesion would spare the ability to represent a single object's distance and allocentric direction from a location, so navigation based on a single landmark should be spared. If an arrangement of objects is viewed in a 3-D environment, the recall or recognition of the arrangement from a new viewpoint will be facilitated by having formed an allocentric representation of their locations. Thus we would predict that damage to the hippocampus would impair performance on this aspect of the task, while memory for the individual objects would be unimpaired. Similarly, we would expect a viewpoint-dependent effect in hemispatial neglect patients. \n\n[Figure 2 appears here; its panels are titled: Schematized Milan Square; HF act given texture=1; PH act + head dir; MP act + query dir; PR activations - Control; MP activns with neglect; PR activations - Lesioned.] \n\nFigure 2: I. Top panel. Left: training locations in the Milan square are plotted in the black rectangle. Middle: HF place cell activations, after being cued that building #1 is nearby and to the north. Place cells are arranged in a polar coordinate grid according to the distance and direction of their preferred locations relative to the centre of the environment (bright white spot). 
The white blurry spot below and at the left end of building #1 is the maximally activated location. Edge points of buildings used during training are also shown here. Right: PH inputs to the place cell layer are plotted in polar coordinates, representing the recalled distances and directions of visible edges associated with the maximally activated location. The externally cued heading direction is also shown here. II. Bottom panel. Left: An imagined view in the egocentric map layer (MP), given that the heading direction is south; the visible edges shown above have been rotated by 180 degrees. Mid-left: the recalled texture features in the PR layer are plotted in two different conditions, simulating attention to the right (circles) and left (stars). Mid-right and right: Similarly, the MP and PR activations are shown after damage to the left side of the egocentric map. \n\nOne of the many curiosities of the hemispatial neglect syndrome is the temporary amelioration of spatial neglect after left-sided vestibular stimulation (placement of cold water into the ear) and transcutaneous mechanical vibration (for a review, see [27]), which presumably affect the perceived head orientation. If the stimulus evokes erroneous vestibular or somatosensory inputs that shift the perceived head direction leftward, then all objects will be mapped further rightward in egocentric space and into the 'good side' of the parietal map in a lesioned model. The model predicts that this effect will also be observed in imagery, as is consistent with a recent result [28]. \n\nAcknowledgments \n\nWe thank Allen Cheung for extensive pilot simulations and John O'Keefe for useful discussions. NB is a Royal Society University Research Fellow. This work was supported by research grants from NSERC, Canada to S.B. and from the MRC, GB to N.B. 
\n\nReferences \n\n[1] E. Bisiach and C. Luzzatti. Cortex, 14:129-133, 1978. \n[2] A. Pouget and T.J. Sejnowski. J. Cog. Neuro., 9(2):222-237, 1997. \n[3] E. Salinas and L.F. Abbott. J. Neurosci., 15:6461-6474, 1995. \n[4] E.A. Maguire, N. Burgess, J.G. Donnett, R.S.J. Frackowiak, C.D. Frith, and J. O'Keefe. Science, 280:921-924, May 8 1998. \n[5] N. Burgess, H. Spiers, E. Maguire, S. Baxendale, F. Vargha-Khadem, and J. O'Keefe. Submitted. \n[6] J. O'Keefe. Exp. Neurol., 51:78-109, 1976. \n[7] N. Burgess, K. Jeffery, and J. O'Keefe. In N. Burgess, K.J. Jeffery, and J. O'Keefe, editors, The Hippocampal and Parietal Foundations of Spatial Cognition. Oxford U. Press, 1999. \n[8] A.D. Milner, H.C. Dijkerman, and D.P. Carey. In N. Burgess, K.J. Jeffery, and J. O'Keefe, editors, The Hippocampal and Parietal Foundations of Spatial Cognition. Oxford U. Press, 1999. \n[9] W.A. Suzuki, E.K. Miller, and R. Desimone. J. Neurophysiol., 78:1062-1081, 1997. \n[10] J. O'Keefe and N. Burgess. Nature, 381:425-428, 1996. \n[11] J.S. Taube. Prog. Neurobiol., 55:225-256, 1998. \n[12] T. Hartley, N. Burgess, C. Lever, F. Cacucci, and J. O'Keefe. Hippocampus, 10:369-379, 2000. \n[13] W.A. Suzuki and D.G. Amaral. J. Neurosci., 14:1856-1877, 1994. \n[14] P.C. Fletcher, C.D. Frith, S.C. Baker, T. Shallice, R.S.J. Frackowiak, and R.J. Dolan. Neuroimage, 2(3):195-200, 1995. \n[15] R.A. Andersen, G.K. Essick, and R.M. Siegel. Science, 230(4724):456-458, 1985. \n[16] L.H. Snyder, A.P. Batista, and R.A. Andersen. Nature, 386:167-170, 1997. \n[17] D. Zipser and R.A. Andersen. Nature, 331:679-684, 1988. \n[18] N. Burgess, E. Maguire, H. Spiers, and J. O'Keefe. Submitted. \n[19] A. Samsonovich and B.L. McNaughton. J. Neurosci., 17:5900-5920, 1997. \n[20] S. Deneve, P.E. Latham, and A. Pouget. Nature Neuroscience, 2(8):740-745, 1999. \n[21] G.J. Quirk, R.U. Muller, J.L. Kubie, and J.B. Ranck. J. Neurosci., 12:1945-1963, 1992. \n[22] L.F. Abbott. Int. J. of Neur. Sys., 6:115-122, 1995. \n[23] C. Guariglia, A. Padovani, P. Pantano, and L. Pizzamiglio. Nature, 364:235-237, 1993. \n[24] C. Lever, F. Cacucci, N. Burgess, and J. O'Keefe. In Soc. Neurosci. Abs., vol. 24, 1999. \n[25] E. Bostock, R.U. Muller, and J.L. Kubie. Hippocampus, 1:193-205, 1991. \n[26] R.U. Muller and J.L. Kubie. J. Neurosci., 7:1951-1968, 1987. \n[27] G. Vallar. In N. Burgess, K.J. Jeffery, and J. O'Keefe, editors, The Hippocampal and Parietal Foundations of Spatial Cognition. Oxford U. Press, 1999. \n[28] C. Guariglia, G. Lippolis, and L. Pizzamiglio. Cortex, 34(2):233-241, 1998. \n", "award": [], "sourceid": 1916, "authors": [{"given_name": "Suzanna", "family_name": "Becker", "institution": null}, {"given_name": "Neil", "family_name": "Burgess", "institution": null}]}